
COMMON DERIVATIONS IN LOCALLY DETERMINED
NONMONOTONIC RULE SYSTEMS AND THEIR COMPLEXITY

By

AMY K. C. S. VANDERBILT

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL

OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT

OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2000

I dedicate this dissertation to the memory of Sara Davidson Sartain, who put
her foot down. Most importantly, I dedicate it to Scott and Sabrina, for whom I
would accomplish anything and with whom I could accomplish anything.

ACKNOWLEDGMENTS

I would like to thank the University of Florida and the Department of Math-
ematics for their encouragement and contributions to my education, experience, and
funding. Special and heart-felt thanks go to my committee, who from the first day
were supportive and helpful above and beyond the call of duty. Thanks to my parents,
Ronald and Linda Sartain, without whom I might never have taken my education this
far and without whom I might never have gone to conferences! An enormous thanks
to my advisor, Dr. Douglas A. Cenzer, who not only put up with me for these years,
but encouraged my tenacious approach and worked to help me achieve my goals, both
for this dissertation and for the years beyond. Lastly, the greatest thanks go to my
husband, Scott, for constant encouragement and support (and chocolate pie!), and
to my daughter, Sabrina, who (although not around for the entire undertaking) was
there with daily hugs and smiles and reminders of what life is really about.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

1 INTRODUCTION
1.1 History
1.2 Motivation
1.3 Preliminary Theorems and Definitions: Nonmonotonic Rule Systems
1.4 Common Derivations in Locally Determined Systems

2 THE FINITE CASE
2.1 Finite Classical Nonmonotonic Rule Systems
2.2 Finite Constructive Nonmonotonic Rule Systems
2.3 Characterizing the Set of Extensions

3 THE INFINITE CASE
3.1 Locally Finite Classical Nonmonotonic Rule Systems
3.2 Locally Finite Constructive Nonmonotonic Rule Systems
3.3 Characterizing the Set of Extensions

4 LOCALLY DETERMINED NONMONOTONIC RULE SYSTEMS
4.1 Locally Determined Nonmonotonic Rule Systems
4.2 Common Derivations in Locally Determined Nonmonotonic Rule Systems
4.3 Characterizing the Set of Extensions

5 COMPUTABILITY AND COMPLEXITY ISSUES
5.1 Finding Extensions with Low Complexity
5.2 Computability and Complexity of Common Derivations

6 ALTERNATE FORMALISMS OF NONMONOTONIC LOGIC
6.1 Default Logic: Preliminary Definitions and Theorems
6.2 Equivalence of Default Logic to Nonmonotonic Rule Systems
6.3 Previous Results Through the Eyes of Default Logic
6.4 Logic Programming: Preliminary Definitions and Theorems
6.5 Equivalence of Logic Programming to Nonmonotonic Rule Systems
6.6 Previous Results Through the Eyes of Logic Programming

7 FUTURE DIRECTIONS

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School

of the University of Florida in Partial Fulfillment of the

Requirements for the Degree of Doctor of Philosophy

COMMON DERIVATIONS IN LOCALLY DETERMINED
NONMONOTONIC RULE SYSTEMS AND THEIR COMPLEXITY

By

Amy K. C. S. Vanderbilt

May 2000

Chairman: Douglas Cenzer
Major Department: Mathematics

Nonmonotonic Rule Systems are a general framework for commonsense reasoning. A nonmonotonic rule system U = < U, N > consists of a universe U, a countable set such as the set of sentences of some propositional logic, and a set N of rules of inference under which a conclusion from U may be inferred from the presence of premises and the absence of restraints. An extension of U is a set of beliefs consistent with the rules of inference. If φ is a formula of the language appearing in every extension of < U, N >, then φ has a common derivation d_φ that generates φ in every extension. Further, every extension of < U, N > is an extension of < U, N ∪ {d_φ} >. These two sets of extensions may be equal, but this is not always true. A constructive view enhances as well as simplifies the results. We explore alternate forms of the common derivation d that ensure certain properties for the resulting nonmonotonic rule system < U, N ∪ {d} >. We then introduce the notion of levels and locally determined systems. The previous questions are explored in terms of these systems. Computability and complexity issues of the common derivations are considered. Several conditions, based on the notion of levels, are detailed which ensure that a nonmonotonic rule system will have a computable extension. These results are refined to give conditions which ensure the existence of extensions of low complexity (such as exponential time). If U is a recursive nonmonotonic rule system with (effective) levels, there is a 1:1 degree-preserving correspondence between the set of extensions of U and the set of paths through a recursive finitely branching (highly recursive) tree. The families of extensions of a nonmonotonic rule system with levels are characterized. We find that for any closed (Π⁰₁) family S of subsets of U having levels, there exists a (recursive) nonmonotonic rule system U with (recursive) levels such that S is the set of extensions of U.


CHAPTER 1
INTRODUCTION

In this chapter, we provide a motivation for the directions of the research as
well as necessary background definitions and preliminary theorems. Along with this,
we take a short exploration of the history of the subject and of the directions research
in the field is presently taking.

1.1 History

In mathematics, a conclusion drawn from a set of axioms can also be drawn
from any larger set of axioms containing the original set. The deduction remains
no matter how the axioms are increased. This is monotonic reasoning in that new
information does not affect the conclusions drawn from a set of premises. From the
time of Euclid, it has been the nature of this monotonic reasoning that makes math-
ematical proofs permanent. Complete proofs of theorems are never later withdrawn
due to new information.

There is, however, another sort of reasoning that is not monotonic. In this
type of reasoning, we deduce a statement based on the absence of any evidence to
the contrary. Such a statement is therefore more of a belief than a truth, as we may
later come upon new information that contradicts the statement. Statistics is often
used as a tool for deducing provisional beliefs, but not every problem comes with
its own set of distributions ready for use.

Such deductions may arise in situations where we are made to choose con-
cretely between two actions in the absence of complete information. It may be that

we cannot wait for the complete information to show itself, or that we have no guar-
antee that it will ever show itself. Thus beliefs are often accepted as truth based on
a lack of contradicting facts.

One of the standard illustrations of this commonsense reasoning is the "Tweety"
example. If we see only birds that can fly, then we deduce that birds can fly. If Tweety
is a bird, we conclude that Tweety can fly. If we later find that Tweety is a penguin,
then we are forced to retract our conclusion that Tweety can fly. We are back to
knowing only that Tweety is a bird, specifically a penguin.

Every vision of a nonmonotonic logic describing belief will be similar, depend-
ing only on our definition of a lack of evidence against a conclusion. McCarthy [1980]
was one of several who initiated formalizations of nonmonotonic logic with his concept
of Circumscription. At the same time, Reiter [Rei80] created a formalization that
he termed Default Logic. Along with these are the Theory of Multiple Believers of
Hintikka (1962), the Truth Maintenance Systems of Doyle [Do79], the Autoepistemic
Logic of Moore (1985), the Theory of Individual and Common Knowledge and Belief
of Halpern and Moses (1984), and Logic Programming with Negation as Failure given
by Apt (1988). This is only a partial list. The first journeys into the nonmonotonic
aspects of logic can be traced to Reiter's Default Logic [Rei80]. This involved creating
a natural extension of classical logic that would easily handle negative information.

These nonmonotonic logics share many properties. Several translations be-
tween them have been made by Konolige (1988), Gelfond and Przymusinska (1986,
1989), Reinfrank and Dressler (1989), and Marek and Truszczynski (1989). It should
be noted that these translations are primarily for propositional logic.

Apt [A90], and Gelfond and Lifschitz [GL88] studied negation as failure in
logic programming. It has since been seen that each of these investigations were
in a common direction. Relationships were discovered by several, including Marek

and Truszczynski (1989), who explored the precise nature of the connections between
Default Logic and Logic Programming.

Since then, many aspects of the computability and complexity of the exten-
sions of these systems have been explored. Subsequently, algorithms for computing
the extensions of any one theory have been created. More importantly, a universal
system of nonmonotonic logic called Nonmonotonic Rule Systems was created by
Marek, Nerode, and Remmel [MNR90 and MNR92a]. This system has been shown
by Marek, Nerode, and Remmel and others to be equivalent to many other systems of
nonmonotonic logic including Reiter's Default Logic as well as Logic Programming,
and Modal Nonmonotonic Logic.

1.2 Motivation

We suppose that we have a sentence φ appearing in some or all extensions
of a nonmonotonic rule system. We then want to construct a single rule that would
derive φ in every extension of the system. The motivation behind this "common
derivation" is that it would tell us what is required, in a nutshell, to derive φ in
this system. Nonmonotonic Logic has a place in fields such as medicine and political
science where this kind of information would be useful.

In the world of medical diagnoses, we imagine that our system involves a
universe U of symptoms to be matched with a diagnosis. We would care to know
a concise list of what symptoms should be present and, just as importantly, what
symptoms should be absent in order to accurately diagnose a condition.

My ultimate goal is to manipulate Nonmonotonic Rule Systems to create a
mathematical system which would lend itself to applications in political science (and
possibly social science as a whole as well as advertising) and explore the subsequent
theory of that system.

First Order Logic is inherently monotonic. Human reasoning, however, in-
volves not only the addition of beliefs, but the retraction of beliefs, based on new
information, i.e. it is nonmonotonic. For example, the study of Astronomy led to the
retraction of the belief of a geocentric universe. Thus, to formalize human reasoning,
a nonmonotonic logic is needed. I propose to consider Political Science as an area of
future application. This area is ripe for the use of Nonmonotonic Logic since Political
Science studies how groups of people react to a campaign and how they reason who
they will vote for. If human reasoning can be formalized, then the reasoning of the
electorate can be modeled.

Other types of mathematics used to model the electorate in the past include
statistics, game theory [Gol94] and measure theory [B90]. Some of these models
managed to get as accurate as 84 percent [Gol94], but had to be redone every time a
subset of the electorate changed its mind (which was often!). Moreover, the accuracy
of the model was only known after the election was already over. These two properties
made the models unreliable and expensive. The problem: these types of mathematics
are monotonic, so the conclusions they reach cannot be retracted upon receipt of new
information.

We imagine that we create one nonmonotonic rule system per category of
voters and we consider the extensions of each theory or system. We would like all
the extensions for a category to contain some conclusion such as, "I will vote for
candidate A." For this reason, we take great interest not only in the intersections of
extensions, but also in the process of deduction used to create those extensions. It is
for this reason that we explore the concept of a common derivation for a conclusion
that appears in some or all extensions. Having one rule that derives the conclusion
"I will vote for candidate A" in each extension allows us to see exactly what must be
concluded previously and what must not have been concluded in order to allow the
derivation to apply.

1.3 Preliminary Theorems and Definitions: Nonmonotonic Rule Systems

The following background is taken from Marek, Nerode, and Remmel's series
of papers [MNR90] and [MNR92a]. Here we introduce the notion of a nonmonotonic
formal system < U, N >.

Definition 1.3.1 A NONMONOTONIC RULE OF INFERENCE is a triple < P, R, φ >,
where P = {α₁, ..., αₙ} and R = {β₁, ..., βₘ} are finite lists of objects from U and
φ ∈ U. Each such rule is written in the form r = α₁, ..., αₙ : β₁, ..., βₘ / φ. Here
{α₁, ..., αₙ} are called the PREMISES of the rule r, {β₁, ..., βₘ} are called the RE-
STRAINTS of the rule r, and cln(r) = c(r) = φ is the CONCLUSION of r.

Either P or R or both may be empty. If R is empty, then the rule is monotonic.
In the case that both P and R are empty, we call the rule r an axiom. For a set A
of rules, let c(A) = {c(r) | r ∈ A}.

A NONMONOTONIC FORMAL SYSTEM is a pair <U,N>, where U is a
nonempty set and N is a set of nonmonotonic rules. Each monotonic formal system
can be identified with the nonmonotonic formal system in which every monotonic
rule is given an empty set of restraints. The standard example for U will be the
set Sent(V) of propositional sentences on a finite or countable set of propositional
variables (or atoms). Here we will frequently assume that the standard propositional
rules of deduction are implicitly included as monotonic rules in any nonmonotonic
rule system with universe U. In the constructive case, a proper subset of these rules
is assumed.

Now, if < U, N > is a nonmonotonic rule system and S is a set of formulas
of the language, call a rule r ∈ N GENERATING FOR THE CONTEXT S if the
premises of r are in S and no element of the restraints is in S. Let GD(N, S) be the
set of all rules in N that generate for the context S.


A subset S ⊆ U is called DEDUCTIVELY CLOSED if for every rule r ∈ N,
we have that if all premises {α₁, ..., αₙ} are in S and all restraints {β₁, ..., βₘ} are not
in S, then the conclusion belongs to S.

In nonmonotonic rule systems, deductively closed sets are not generally closed
under arbitrary intersections as in the monotone case. But deductively closed sets are
closed under intersections of descending chains. Since U is deductively closed, by the
Kuratowski-Zorn Lemma, any I ⊆ U is contained in at least one minimal deductively
closed set. The intersection of all the deductively closed sets containing I is called
the set of SECURED CONSEQUENCES of I. This set is also the intersection of all
minimal deductively closed sets containing I. Deductively closed sets are thought of
as representing possible "points of view"; the intersection of all deductively closed
sets containing I represents the common information present in all such "points of
view" containing I.

Example 1.3.1 Let U = {a, b, c}.

(a) Consider U with N₁ = { : /a ; a : ¬b/b }. There is only one minimal
deductively closed set, S = {a, b}. Then S is the set of secured consequences of
< U, N₁ >.

(b) Consider U with N₂ = { : /a ; a : b/c ; a : c/b }. Then there are two minimal
deductively closed sets, S₁ = {a, b} and S₂ = {a, c}. The singleton set {a} is the set
of secured consequences of < U, N₂ >.
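The minimal deductively closed sets of Example 1.3.1(b) can be checked by brute force. The following is an illustrative sketch, not part of the dissertation: rules are encoded as (premises, restraints, conclusion) triples, and `closed` is our own helper name.

```python
# A small sketch (assumed encoding, not from the text): rules are
# (premises, restraints, conclusion) triples over U = {a, b, c}, and we
# enumerate the minimal deductively closed sets of Example 1.3.1(b).
from itertools import chain, combinations

U = {"a", "b", "c"}
# N2 = { : /a ;  a : b/c ;  a : c/b }
N2 = [(set(), set(), "a"),
      ({"a"}, {"b"}, "c"),
      ({"a"}, {"c"}, "b")]

def closed(S, N):
    """S is deductively closed if every rule with premises in S and
    restraints disjoint from S has its conclusion in S."""
    return all(c in S for (P, R, c) in N if P <= S and not (R & S))

subsets = [set(x) for x in chain.from_iterable(
    combinations(sorted(U), k) for k in range(len(U) + 1))]
closed_sets = [S for S in subsets if closed(S, N2)]
minimal = [S for S in closed_sets
           if not any(T < S for T in closed_sets)]
print(minimal)                        # the two minimal closed sets {a, b} and {a, c}
secured = set.intersection(*minimal)  # secured consequences of the empty set
print(secured)                        # the singleton {a}
```

The intersection of the minimal deductively closed sets recovers the secured consequences {a}, matching the example.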

Part (b) of this example shows that the set of all secured consequences is not,
in general, deductively closed in the nonmonotone case. Note that if we implicitly
assume the rules of propositional logic, we define Cn(S) to be the closure of the set
S under these implicit monotonic rules.

Given a set S and an I ⊆ U, an S-DERIVATION of φ from I in the system
< U, N > is a finite sequence < φ₁, ..., φ_k > such that φ_k = φ and, for all i ≤ k,
each φᵢ is in I, or is an axiom, or is the conclusion of a rule r ∈ N such that all the
premises of r are included in {φ₁, ..., φ_{i−1}} and all restraints of r are in U − S. An S-
CONSEQUENCE of I is an element of U occurring in some S-derivation from I. Let
C_S(I) be the set of all S-consequences of I in < U, N >. I is a subset of C_S(I). Note
that S enters solely as a restraint on the use of the rules, imposed by the restraints
in the rules. A single restraint in a rule in N may be in S and therefore prevent the
rule from ever being applied in an S-derivation from I, even though all the premises
of that rule occur earlier in the derivation. Thus, S contributes no members directly
to C_S(I), although members of S may turn up in C_S(I) by an application of a rule
which happens to have its conclusion in S. For a fixed S, the operator C_S(·) is
monotonic. That is, if I ⊆ J, then C_S(I) ⊆ C_S(J). Also, C_S(C_S(I)) = C_S(I).

Generally, C_S(I) is not deductively closed in < U, N >. It is possible that all
the premises of a rule are in C_S(I) and the restraints of that rule are outside of C_S(I),
but a restraint of that rule is in S, preventing the conclusion from being put into
C_S(I).

Example 1.3.2 Let U = {a, b, c}, N = { : /a ; a : b/c }, and S = {b}. Then C_S(∅) =
{a} is not deductively closed.
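The operator C_S(I) can be sketched as a least fixpoint over the same (premises, restraints, conclusion) encoding — an illustrative sketch, not the author's implementation; `C` is our own helper name. Run on Example 1.3.2, the restraint b ∈ S blocks the rule a : b/c:

```python
# A sketch of the consequence operator C_S(I) as a least fixpoint
# (assumed encoding; rules are (premises, restraints, conclusion) triples).

def C(S, I, N):
    """All S-consequences of I: close I under rules whose restraints
    avoid S, until nothing new can be derived."""
    S, out = set(S), set(I)
    changed = True
    while changed:
        changed = False
        for P, R, c in N:
            if P <= out and not (R & S) and c not in out:
                out.add(c)
                changed = True
    return out

# Example 1.3.2: U = {a, b, c}, N = { : /a ;  a : b/c }, S = {b}
N = [(set(), set(), "a"), ({"a"}, {"b"}, "c")]
print(C({"b"}, set(), N))   # {'a'} — the restraint b ∈ S blocks a : b/c
print(C(set(), set(), N))   # with S = ∅ nothing is blocked, so c is derived too
```

Note that {a} is not deductively closed: the rule a : b/c has its premise in C_S(∅) and its restraint outside C_S(∅), yet the restraint lies in S.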

However, we do get the following result:

Theorem 1.3.2 (MNR90) If S ⊆ C_S(I), then C_S(I) is deductively closed.

We say that S ⊆ U is an EXTENSION of I if C_S(I) = S.

S is an extension of I if two things happen. First, every element of S is
deducible from I, that is, S ⊆ C_S(I) (this is the analogue of the adequacy property
in logical calculi). Second, the converse holds: all the S-consequences of I belong to
S (this is the analogue of completeness).

In purely monotonic rule systems, extensions usually refer to larger theories
(under inclusion). This is definitely not the meaning of extension for nonmonotonic
rule systems. In fact, it will be seen that for any two distinct extensions A and B of a
nonmonotonic rule system, it is never the case that A ⊂ B. An important family in
propositional logic is the set of complete consistent extensions, CCE(T), of a theory
T. Any such family can be obtained as the set of extensions of a nonmonotonic
rule system. We obtain this system by taking the monotonic rules as restraint-free
nonmonotonic rules and adding rules of the form : φ/¬φ and : ¬φ/φ, for each sentence
φ (to secure completeness).

The notion of an extension is related to that of a minimal deductively closed
set.

Theorem 1.3.3 (MNR90) If S is an extension of I, then: (1) S is a minimal
deductively closed superset of I. (2) For every I' such that I ⊆ I' ⊆ S, C_S(I') = S.

Example 1.3.3 Let a, b, and c be atoms in the language L and let

U = {a, b, c}

and

N = { : {a}/b ; : {c}/b ; : {a}/c ; : {c}/a }.

Then the nonmonotonic rule system < U, N > has two extensions:

S₁ = {b, c}

and

S₂ = {b, a}.

Moreover, we find that

GD(N, S₁) = { : {a}/b, : {a}/c }

and

GD(N, S₂) = { : {c}/b, : {c}/a }.
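For a finite universe, extensions can be found by brute force from the fixpoint characterization C_S(∅) = S. The following sketch (our own encoding, not from the text) verifies Example 1.3.3:

```python
# A brute-force check (illustrative sketch) that Example 1.3.3 has exactly
# the two extensions {b, c} and {a, b}: S is an extension iff C_S(∅) = S.
from itertools import chain, combinations

U = {"a", "b", "c"}
N = [(set(), {"a"}, "b"), (set(), {"c"}, "b"),
     (set(), {"a"}, "c"), (set(), {"c"}, "a")]

def C(S, I, N):
    """Least fixpoint: close I under rules whose restraints avoid S."""
    out = set(I)
    changed = True
    while changed:
        changed = False
        for P, R, c in N:
            if P <= out and not (R & S) and c not in out:
                out.add(c)
                changed = True
    return out

candidates = [set(x) for x in chain.from_iterable(
    combinations(sorted(U), k) for k in range(len(U) + 1))]
extensions = [S for S in candidates if C(S, set(), N) == S]
print(extensions)   # the antichain consisting of {a, b} and {b, c}
```

As Theorem 1.3.5 below asserts, the two extensions form an antichain: neither is contained in the other.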

Now, we may well-order the rules of a nonmonotonic rule system < U, N > by
some well-ordering ≺. We may then define AD_≺ to be the set of all rules in N which
are applied when the well-ordering is used to close under the set of rules. This is
done in the following way: we define an ordinal η_≺. For every ε < η_≺ we define a set
of rules AD_ε and a rule r_ε. If the sets AD_ε, ε < α, have been defined but η_≺ has not
been defined, then a rule r is applicable at stage α if the following two conditions
hold:

(a) c(∪_{ε<α} AD_ε) ⊢ P(r);

(b) c(∪_{ε<α} AD_ε) ⊬ β, for every β ∈ R(r).

(1) If there is no applicable rule r ∈ N \ ∪_{ε<α} AD_ε, then η_≺ = α and
AD_α = ∪_{ε<α} AD_ε.

(2) Otherwise, define r_α to be the ≺-least applicable rule r ∈ N \ ∪_{ε<α} AD_ε
and set AD_α = (∪_{ε<α} AD_ε) ∪ {r_α}.

(3) Put AD_≺ = ∪_{ε<η_≺} AD_ε.

Intuitively, we begin with the empty set of formulas and apply the rules of N
in their order. Each time we apply only the ordering-least rule which can be applied.
At some point (since the ordinal is well-defined [MT93a]) there will be no available
rules that may be applied. At this point, AD_≺ is the set of all rules applied and η_≺
is the number of steps needed to reach this stopping point.

Then, let T_≺ be c(AD_≺). This is the theory GENERATED BY ≺. Then
GD(N, T_≺) ⊆ AD_≺, so that

c(GD(N, T_≺)) ⊆ T_≺.


Now, if ≺ is a well-ordering of the rules and, for every β ∈ R(AD_≺), β ∉ c(AD_≺),
then T_≺ is an extension of < U, N >. More precisely, if T_≺ = c(GD(N, T_≺)), then T_≺ is an
extension of < U, N >. We now have that:

(a) [MT93a] If S is an extension of < U, N >, then there is some well-ordering
≺ of the rules in N such that

S = c(AD_≺) = c(GD(N, S)).

And,

(b) If S = T_≺ = c(AD_≺) = c(GD(N, S)) for some well-ordering ≺, then S is
an extension of < U, N >.

Thus, S is an extension of < U, N > if and only if S = c(GD(N, S)).

It is important to note that a well-ordering may not give rise to an extension.
The new rule r_α may have a conclusion that contradicts the restraint of a previously
applied rule. Then T_≺ is not an extension. Consider the following example:

Example 1.3.4 Let < U, N > be the nonmonotonic rule system where U = {a, b}
and N = { r₁ = : {a}/b ; r₂ = b : /a }. Then rule r₁ is applicable at stage one. At stage two, rule
r₂ is applicable; however, the conclusion of r₂ violates the restraint of r₁. Thus, this
system has no extensions.
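The well-ordering construction above, specialized to a finite list of rules, can be sketched as follows. This is an illustrative simplification (⊢ is taken to be membership in the conclusions so far, and `close_in_order` is our own helper name); it reproduces the failure in Example 1.3.4:

```python
# A sketch (simplified to the finite case) of closing under the rules in a
# fixed order, flagging the kind of failure seen in Example 1.3.4.

def close_in_order(rules):
    """Apply, at each stage, the first not-yet-applied rule whose premises
    are already concluded and whose restraints are not; then report whether
    some later conclusion violates the restraint of an applied rule."""
    applied, concluded = [], set()
    while True:
        r = next((r for r in rules if r not in applied
                  and r[0] <= concluded and not (r[1] & concluded)), None)
        if r is None:
            break
        applied.append(r)
        concluded.add(r[2])
    ok = all(not (R & concluded) for (P, R, c) in applied)
    return concluded, ok

# Example 1.3.4: N = { r1 = : {a}/b ;  r2 = b : /a }
rules = [(set(), {"a"}, "b"), ({"b"}, set(), "a")]
T, is_extension = close_in_order(rules)
print(T, is_extension)   # T = {a, b}, but r1's restraint is violated: not an extension
```

Rule r₁ is applied first (a has not been concluded), r₂ then concludes a, and the final check detects that a violates the restraint of the already-applied r₁, so this ordering yields no extension.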

Thus, we must concede that if c(∪_{ε<α} AD_ε) ⊢ β, where β ∈ R(r) for some
rule r ∈ ∪_{ε<α} AD_ε, then the well-ordering does not define an extension.


Let ℰ(< U, N >) be the set of all extensions of the nonmonotonic rule system
< U, N >. Call two nonmonotonic rule systems < U₁, N₁ > and < U₂, N₂ > EQUIV-
ALENT, written < U₁, N₁ > ≡ < U₂, N₂ >, if they have exactly the same extensions,
i.e., if ℰ(< U₁, N₁ >) = ℰ(< U₂, N₂ >) [MT93a].

Example 1.3.5 Let < U, N₁ > be a nonmonotonic rule system where U = {a, b}
and N₁ = { : /a ; a : /b }. Then consider the nonmonotonic rule system < U, N₂ >
where N₂ = { : /b ; b : /a }. These two systems are equivalent, as each has the
same single extension

S = {a, b}.

Theorem 1.3.4 (MT93) A nonmonotonic rule system < U, N > has an inconsistent
extension if and only if Sent(L) is an extension and < U, N > has no other
extensions.

Theorem 1.3.5 (MNR90) The set of extensions of a nonmonotonic rule system
forms an antichain. That is, if S₁, S₂ are extensions of a nonmonotonic rule system
< U, N > and S₁ ⊆ S₂, then S₁ = S₂.

Though this is a widely known fact about all forms of nonmonotonic logic,
it is an important concept for the characterization of the family of extensions of a
nonmonotonic rule system. The natural question here is: under what conditions is an
antichain the set of extensions of a nonmonotonic rule system? Consider the following
example:

Example 1.3.6 Consider the monotonic theory T = Cn(a → b) and the set
V = {a, b} of atoms. The set of complete consistent extensions of this monotonic
theory can be seen as the set of extensions of the nonmonotonic rule system < V, N >
where N = { : ¬a/a ; : a/¬a ; : ¬b/b ; : b/¬b ; a : /b }. This system has three extensions:

S₁ = Cn({a, b}),

S₂ = Cn({¬a, b}),

and

S₃ = Cn({¬a, ¬b}).
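Example 1.3.6 can be checked mechanically if we simplify: work over literals only (writing na for ¬a) and omit the implicit monotonic rules, so each extension found below stands for its closure Cn(·) in the full system. A sketch under those assumptions, not the author's construction:

```python
# Example 1.3.6 over literals only (deliberate simplification: "na" stands
# for ¬a, and the implicit monotonic rules are omitted, so each extension
# found here stands for its Cn-closure in the full system).
from itertools import chain, combinations

U = {"a", "na", "b", "nb"}
N = [(set(), {"na"}, "a"), (set(), {"a"}, "na"),
     (set(), {"nb"}, "b"), (set(), {"b"}, "nb"),
     ({"a"}, set(), "b")]

def C(S, I, N):
    """Least fixpoint: close I under rules whose restraints avoid S."""
    out = set(I)
    changed = True
    while changed:
        changed = False
        for P, R, c in N:
            if P <= out and not (R & S) and c not in out:
                out.add(c)
                changed = True
    return out

candidates = [set(x) for x in chain.from_iterable(
    combinations(sorted(U), k) for k in range(len(U) + 1))]
extensions = [S for S in candidates if C(S, set(), N) == S]
print(extensions)   # three extensions: {a, b}, {na, b}, {na, nb}
```

The candidate {a, ¬b} fails: the monotonic rule a : /b forces b, contradicting completeness, which is exactly why Cn({a, ¬b}) is not a complete consistent extension of T.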

It will not always be so easy to construct a nonmonotonic rule system whose
extensions are exactly what we want them to be. We address this in detail in
subsequent chapters.

Next we need to define the notions of recursive and highly recursive non-
monotonic rule systems < U, N >. Without loss of generality, we may assume that
U ⊆ ω, and we will identify a rule r = α₁, ..., αₙ : β₁, ..., βₘ / φ in N with its code
c(r) = < k, l, φ >, where Dₖ = {α₁, ..., αₙ} and Dₗ = {β₁, ..., βₘ}. In this way, we can
think of N as a subset of ω. We say that a nonmonotonic rule system < U, N > is
RECURSIVE if both U and N are recursive subsets of ω. To define the notion of
a highly recursive nonmonotonic rule system < U, N >, we must first introduce the
concept of a PROOF SCHEME for φ in < U, N >.

A proof scheme for φ is a finite sequence p = << φ₀, r₀, can(G₀) >, ..., <
φₘ, rₘ, can(Gₘ) >> such that φₘ = φ and:

(1) If m = 0, then: (a) φ₀ is an axiom (that is, there exists a rule r ∈ N, r = :
/φ₀), r₀ = r, and G₀ = ∅; or (b) φ₀ is the conclusion of a rule r = : β₁, ..., β_r / φ₀, r₀ = r, and
G₀ = {β₁, ..., β_r}.

(2) If m > 0, << φ₀, r₀, can(G₀) >, ..., < φ_{m−1}, r_{m−1}, can(G_{m−1}) >> is a proof
scheme of length m and φₘ is the conclusion of a rule r = φ_{i₀}, ..., φ_{i_s} : β₁, ..., β_r / φₘ where
i₀, ..., i_s < m, rₘ = r, and Gₘ = G_{m−1} ∪ {β₁, ..., β_r}.

The formula φₘ is called the CONCLUSION of the proof scheme p and is denoted
by cln(p); the set Gₘ is called the SUPPORT of p and is denoted by supp(p).

The idea behind this concept is this: any S-derivation p in the system U =
< U, N > uses some negative information about S to ensure that the restraints of rules
that were used are outside of S. But this negative information is finite; that is, it
involves a finite subset G of the complement of S, such that as long as G ∩ S₁ = ∅, p
is an S₁-derivation as well. In the notion of proof scheme we capture this finitary
character of the S-derivation.

A proof scheme with conclusion φ may include a number of rules that are
irrelevant to the enterprise of deriving φ. There is a natural preordering on proof
schemes. Namely, we say that p ≺ p₁ if every rule appearing in p also appears in p₁.
The relation is not a partial ordering, even if we restrict ourselves to proof schemes
with a fixed conclusion φ. Yet it is a well-founded relation; namely, for every proof
scheme p there exists a MINIMAL PROOF SCHEME p₁ ≺ p [q is minimal if for every
p₂, if p₂ ≺ q, then q ≺ p₂]; moreover, we can, if desired, require the conclusion of p₁
to be the same as that of p.

We also set p ≈ p₁ to mean p ≺ p₁ ∧ p₁ ≺ p, and see that ≈ is an
equivalence relation and that its equivalence classes are finite.
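The preorder on proof schemes can be illustrated by brute force on a tiny hypothetical system (our own example, not from the text; `schemes` is an illustrative helper, and each scheme is represented simply by its sequence of rules). Minimal schemes never repeat a rule, so sequences without repetition suffice:

```python
# A brute-force sketch of the preorder on proof schemes: p ≺ p1 iff every
# rule of p occurs in p1, and p is minimal if no scheme for the same
# conclusion uses a strictly smaller set of rules. Hypothetical system:
# r0 = : /a,  r1 = a : c/b,  r2 = : /b  (frozensets keep rules hashable).
from itertools import permutations

N = [(frozenset(), frozenset(), "a"),
     (frozenset({"a"}), frozenset({"c"}), "b"),
     (frozenset(), frozenset(), "b")]

def schemes(conclusion):
    """All rule sequences (no repeats) that legally end in the conclusion:
    each rule's premises must already have been concluded earlier."""
    out = []
    for k in range(1, len(N) + 1):
        for seq in permutations(N, k):
            concluded, legal = set(), True
            for P, R, c in seq:
                if not P <= concluded:
                    legal = False
                    break
                concluded.add(c)
            if legal and seq[-1][2] == conclusion:
                out.append(seq)
    return out

ps = schemes("b")
minimal = [p for p in ps if not any(set(q) < set(p) for q in ps)]
print([[N.index(r) for r in p] for p in minimal])   # [[2], [0, 1]]
```

Two incomparable minimal schemes for b survive: the one-step scheme using r2 alone, and the two-step scheme r0 then r1; every other scheme, such as r0 followed by r2, properly contains one of them.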

We say that the nonmonotonic rule system < U, N > is LOCALLY FINITE if
for every φ ∈ U there are finitely many ≺-minimal proof schemes with conclusion φ.
This concept is motivated by the fact that, for locally finite systems, for every φ there
is a finite set Dr_φ of derivations such that all the derivations of φ are inessential
extensions of derivations in Dr_φ. That is, if p is a derivation of φ, then there is a
derivation p₁ ∈ Dr_φ such that p₁ ≺ p. Finally, we say that the system < U, N > is
HIGHLY RECURSIVE if it is recursive, locally finite, and the map φ → can(Dr_φ)
is partial recursive. That is, there exists an effective procedure which, given any
φ ∈ U, produces a canonical index of the set of all ≺-minimal proof schemes with
conclusion φ. We let ℰ(< U, N >) denote the set of extensions of the system.

For a nonmonotonic rule system < U, N >, define a rule r to be RESTRAINT-
FREE if its set of restraints is empty. Define a sentence φ ∈ U to be TERMINALLY
RESTRAINT-FREE if the last rule in each of its minimal proof schemes is restraint-
free. Define a subset A of U to be terminally restraint-free if every sentence φ ∈ A is
terminally restraint-free.

Definition 1.3.6 For a nonmonotonic rule system < U, N >, define a sentence
φ ∈ U to be PREMISE-FREE if each of its minimal proof schemes has length one.
Define a subset A of U to be premise-free if every sentence φ ∈ A is premise-free.

Definition 1.3.7 For a nonmonotonic rule system < U, N >, define a rule r to
be NORMAL if the set of restraints consists only of the negation of the conclusion.
Define a sentence φ ∈ U to be TERMINALLY NORMAL if the last rule in each of
its minimal proof schemes is normal. Define a subset A of U to be terminally normal
if every sentence φ ∈ A is terminally normal.

To consider the relationship between sets of extensions and trees, we will need
the following background, as given by Cenzer and Remmel [CR99].

Let ω = {0, 1, 2, ...} denote the set of natural numbers and let < , > : ω ×
ω → ω be some fixed one-to-one and onto recursive pairing function such that the
projection functions π₁ and π₂, defined by π₁(< x, y >) = x and π₂(< x, y >) = y,
are also recursive. We extend our pairing function to code n-tuples for n > 2 by the
usual inductive definition, that is, < x₁, ..., xₙ > = < x₁, < x₂, ..., xₙ >> for n ≥ 2. We
let ω^{<ω} denote the set of all finite sequences from ω and 2^{<ω} denote the set of all
finite sequences of 0's and 1's. Given α = < α₁, ..., αₙ > and β = < β₁, ..., βₖ > in
ω^{<ω}, we write α ⊑ β if α is an initial segment of β. Now we can identify a sequence
α = < α₁, ..., αₙ > with its code c(α) = < n, < α₁, ..., αₙ >> in ω. We let 0 be the code
of the empty sequence ∅. Thus, when we say a set S ⊆ ω^{<ω} is recursive or recursively
enumerable, we mean the set {c(α) | α ∈ S} is recursive or recursively enumerable.

A TREE T is a nonempty subset of ω^{<ω} such that T is closed under initial segments. A
function f : ω → ω is an infinite PATH through T if for all n, < f(0), ..., f(n) > ∈ T.
We let P(T) denote the set of all paths through T. A set A of functions is a Π⁰₁ class
if there is a recursive predicate R such that A = {f ∈ ω^ω | (∀n) R(< f(0), ..., f(n) >)}.
A Π⁰₁ class A is RECURSIVELY BOUNDED if there is a recursive function g : ω → ω
such that ∀f ∈ A, ∀n, f(n) ≤ g(n). A is a Π⁰₁ class if and only if A = P(T) for some
recursive tree T. We say that a tree T is HIGHLY RECURSIVE if T is a recursive,
finitely branching tree such that there is a recursive procedure which, given α = < α₁,
..., αₙ > in T, produces a canonical index of the set of immediate successors of α in
T, that is, produces a canonical index of {β = < α₁, ..., αₙ, k > | β ∈ T}.

Here we say the canonical index, can(X), of the finite set X = {x₁ < ... < xₙ} ⊆ ω is 2^x₁ + ... + 2^xₙ,
and the canonical index of the empty set is 0. We let Dₖ denote the finite set whose
canonical index is k, that is, can(Dₖ) = k. It is then the case that if A is a recursively
bounded Π⁰₁ class, then A = P(T) for some highly recursive tree T. We note that
if T is a binary tree, then the set of paths through T is a collection of {0, 1}-valued
functions and, by identifying each function f ∈ P(T) with the set A_f = {x | f(x) = 1}
of which f is the characteristic function, we can think of the set of paths through T
as a Π⁰₁ class of sets.
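The canonical-index coding can be sketched directly; the decoder `D` below is our own helper name, mirroring the Dₖ notation:

```python
# A sketch of canonical indices for finite sets:
# can({x1 < ... < xn}) = 2^x1 + ... + 2^xn, with D(k) decoding the finite
# set whose canonical index is k (the bit positions of k).

def can(X):
    """Canonical index of a finite set of naturals."""
    return sum(2 ** x for x in X)

def D(k):
    """The finite set whose canonical index is k; D(0) is the empty set."""
    return {i for i in range(k.bit_length()) if (k >> i) & 1}

print(can({1, 3}))   # 10
print(D(10))         # {1, 3}
```

The two maps are mutually inverse, which is what lets a single natural number stand in for a finite set of premises, restraints, or tree successors.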

We then have the following theorem:

Theorem 1.3.8 (MNR92a) Given a highly recursive nonmonotonic rule system
< U,N >, there is a highly recursive binary tree T such that there is an effective one-
to-one degree-preserving correspondence between the set of extensions of the system
and the set P(T) of infinite paths through T.


A bit more is required for the reverse implication, but it may be shown in the
following sense:

Theorem 1.3.9 (MNR92a) For any recursive binary tree, there is a highly re-
cursive nonmonotonic rule system such that there is an effective one-to-one degree-
preserving correspondence between the set of extensions of the system and the set
P(T) of infinite paths through T.

The significance of these results is that we can apply recursive Π⁰₁ classes to
obtain numerous corollaries [MNR92a, CR98].

Corollary 1.3.10 (CR99) Let S = (U, N) be a highly recursive nonmonotonic rule
system such that ℰ(S) ≠ ∅. Then

(i) If S has only finitely many extensions, then every extension E of S is recursive.

A new representation result is given in chapter four for locally determined
nonmonotonic rule systems.

1.4 Common Derivations and Locally Determined Systems

In the second chapter we consider the finite case in which a nonmonotonic rule
system has a finite language and/or finitely many extensions. Within this chapter
we consider various forms of the common derivation of a sentence appearing in all or
some of the extensions of a nonmonotonic rule system. Certain forms of the common
derivation are explored not only in the classical sense but also using a constructive
view.

The infinite case is explored in the third chapter in which we have an infinite
number of extensions and/or an infinite language. Within this chapter, as in the
second, we consider various forms of the common derivation for a sentence appearing
in all or some of the extensions of a nonmonotonic rule system. Each of these is


explored in the classical sense as well as with a constructive view. We consider the
problem of characterizing the set of extensions of a locally finite nonmonotonic rule
system.

Locally determined nonmonotonic rule systems are introduced in the fourth
chapter. When considering nonmonotonic rule systems with an infinite language, a
problem arises in determining at what stage we are sure either to have concluded φ
or (in the case of restraints) not to have concluded φ, for a given φ ∈ U.
To solve this problem, the notion of "levels" was devised. Intuitively, we require
that for each i (as we take U to be enumerated {φ_1, φ_2, ...}), there is a fixed n_i such
that the inclusion or exclusion of φ_i is determined by the n_i-th stage. Hence the
term "locally determined" is used. This concept allows us to extend results from the
infinite case. The set of extensions of a locally determined nonmonotonic rule system
is characterized in general and in the effective setting.

In the fifth chapter, we consider the complexity of the sets of extensions and
explore the comparative complexity of the common derivations themselves. Within
these systems we explore both the finite and infinite settings, with the classical and
constructive views.

Alternate formalisms of nonmonotonic logic, specifically Default Logic and
Logic Programming, are discussed in the sixth chapter. We show the equivalence of
these formalisms to nonmonotonic rule systems and view some of the results of the
previous chapters in terms of these.

Lastly, in the seventh chapter we consider possible applications of the results
and future directions of the research.

CHAPTER 2
THE FINITE CASE

In this chapter we consider only those nonmonotonic rule systems which have
at least one but only finitely many extensions and/or a finite language. We consider
nonmonotonic rule systems in the classical view and also in the constructive view.

The underlying logic for a nonmonotonic rule system U = ⟨U, N⟩ can be any
form of logic desired. We will often take U to be the set of sentences of some propo-
sitional language. The usual monotonic rules of propositional logic are implicitly
included in each nonmonotonic rule system unless otherwise stated. We will choose
only one representative for each sentence in some systematic way so that there are
not infinitely many representations of each sentence in U. For example, ¬a ∨ ¬b is
identified with ¬(a ∧ b).

2.1 Finite Classical Nonmonotonic Rule Systems

We consider a single nonmonotonic rule system U that has a finite number
of extensions and/or a finite language. We intend to explore the following question:
if φ is a formula of the language L such that φ ∈ S for every extension S of U,
then what, if anything, can be said about the theory and/or the formula and its
derivation in each extension? We would like to find a single rule that generates φ in
every extension. It would be most useful if the rule could tell us, through premises
and restraints, exactly what is required to derive φ in the system. This rule need not
be in the set N of nonmonotonic rules, but it would be desirable to retain the same
set of extensions upon adding this rule to N.


We might first consider the simplest possible form of the common derivation,
namely taking φ as an axiom: : /φ. This will certainly generate φ in every extension,
but we may gain new extensions when the rule is added to the set N of rules.

Example 2.1.7 Let U = ⟨U, N⟩ be the nonmonotonic rule system where U =
{a, b, c} and

N = {: {a, c}/b; : {b, c}/a; a ∨ b : {a, b}/c}.

This system has two extensions, S_1 = {a} and S_2 = {b}. The sentence a ∨ b appears
in each extension. The axiomatic common derivation for a ∨ b will be : /a ∨ b. Adding
this rule to the set N of rules will produce a third extension S_3 = {a ∨ b, c}.

This formulation does not accomplish the purpose of the common derivation,
in that it tells us nothing about what premises and restraints are involved in deriving
φ in the various extensions of the original system. Thus, we consider other forms.

Definition 2.1.11 Let U = ⟨U, N⟩ be a finite nonmonotonic rule system. Then
U has finitely many extensions, S_1, ..., S_m. Suppose that φ ∈ U appears in every S_i.
Define the support-based common derivation d_φ for φ in U by

d_φ = : {β_1 ∧ ... ∧ β_m | β_i ∉ S_i}/φ.

Example 2.1.8 Letting U be the nonmonotonic rule system where U = {a, b} and

N = {: {¬b}/b; : {b}/¬b; : /a},

we will have two extensions, S_1 = {a, b} and S_2 = {a, ¬b}. The intersection of these
is {a}. Thus there are eight sentences not in S_1 and eight sentences not in S_2. Hence
there are 64 sentences of the form β_1 ∧ β_2, where β_1 ∉ S_1 and β_2 ∉ S_2. Some of
these will be trivially false, such as ¬b ∧ b. Some others will contain that which is
in neither extension, such as ¬a ∧ ¬a, which will collapse to just ¬a. The rest will
be restraints of d_a. We see how the support-based common derivation becomes very
large very quickly. This will be a disadvantage.
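The count in the example can be checked mechanically. In the following sketch (the encoding is ours: sentences as strings, a conjunction as the tuple of its conjuncts, and no logical collapsing of trivially false or redundant conjunctions), the restraints of the support-based common derivation are simply the cross product of the complements U \ S_i:

```python
from itertools import product

def support_restraints(universe, extensions):
    """Restraints of the support-based d_phi: every conjunction
    beta_1 AND ... AND beta_m with each beta_i chosen outside the
    extension S_i, modeled as an m-tuple of conjuncts."""
    complements = [universe - s for s in extensions]
    return set(product(*complements))

# Ten sentences, two extensions each omitting eight of them:
universe = {f's{i}' for i in range(10)}
restraints = support_restraints(universe, [{'s0', 's1'}, {'s2', 's3'}])
```

With two extensions each omitting eight sentences, this yields the 8 × 8 = 64 restraints noted above, before any logical simplification.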

We nevertheless get the following result:

Theorem 2.1.12 Let U = ⟨U, N⟩ be a finite nonmonotonic rule system. Then U
has finitely many extensions. Suppose a sentence φ ∈ U appears in every extension
of the system. Let d_φ be the common derivation of φ as defined in Definition 2.1.11.
Then d_φ applies in every extension of the system U and

E(U) = E(⟨U, N ∪ {d_φ}⟩).

Proof: Let U = ⟨U, N⟩ be a nonmonotonic rule system with finitely many
extensions S_1, ..., S_m. Suppose a sentence φ ∈ U appears in every extension of
the system. Let d_φ be the common derivation of φ as defined in Definition 2.1.11.
Consider an arbitrary extension S_j of U. Take any conjunction β_1 ∧ ... ∧ β_m such that
β_i ∉ S_i. Then β_j ∉ S_j, so the conjunction β_1 ∧ ... ∧ β_m is not in S_j, and thus none
of the restraints of d_φ is in S_j. Thus d_φ applies in S_j, and therefore it applies in every
extension of the system U. We now have that

E(U) ⊆ E(⟨U, N ∪ {d_φ}⟩).

To show that we retain the same set of extensions, suppose that S_0 is an extension
of ⟨U, N ∪ {d_φ}⟩ that is not an extension of U. Then S_0 is not S_i for any
i ∈ {1, 2, ..., m}. Since S_0 is not an extension of U, the new rule must apply in S_0.
The set of extensions of U is noninclusive, so there exist ψ_1, ..., ψ_m ∈ S_0 such
that ψ_i ∉ S_i. Since we have implicitly included the rules of propositional logic, we
have that ψ_1 ∧ ... ∧ ψ_m ∈ S_0. By the definition of d_φ, this conjunction is a restraint of
d_φ. Thus the rule d_φ does not apply in S_0, contradicting that S_0 is a new extension.
Thus,

E(U) = E(⟨U, N ∪ {d_φ}⟩).

Similarly, we might construct a rule consisting only of premises and a conclu-
sion. The advantage of this form of the common derivation is that it reveals, at least
in part, what is involved in deriving φ in the extensions. The disadvantage is that
we must first know the extensions of the system before we can construct the rule.
Finding this rule takes considerably more time than would a rule which relied only
on the system U without having to find the extensions. Also, this rule would not
be possible in the infinite case. Thus we seek a form of the common derivation that
is based on the rules of the system itself, not the extensions, and is also possible
in the infinite case. We find the following form. For a sequence A_1, ..., A_m of sets
of formulas, let ∧_i A_i denote the set of conjunctions α_1 ∧ ... ∧ α_m
where each α_i ∈ A_i. Let ∧A_i denote the usual conjunction, in which the formulas of
A_i are themselves conjuncted to form a single formula. The notations ∨_i A_i and ∨A_i are
similarly defined.
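The operation ∧_i A_i is a cross product followed by the usual collapsing of repeated conjuncts. A small sketch (our own representation: a conjunction as the frozenset of its conjuncts, so reorderings and duplicates collapse automatically; ∨_i A_i would be coded identically):

```python
from itertools import product

def wedge(sets):
    """The set-valued conjunction: all a_1 AND ... AND a_m with each a_i
    drawn from A_i, a conjunction being the frozenset of its conjuncts,
    so that a AND a AND p collapses to {a, p}."""
    return {frozenset(choice) for choice in product(*sets)}

demo = wedge([{'a', 'p1'}, {'a', 'p2'}, {'p1', 'p2'}])
```

Here `demo` contains the four distinct conjunctions a ∧ p_1, a ∧ p_2, p_1 ∧ p_2, and a ∧ p_1 ∧ p_2, even though the raw cross product has eight tuples.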

Definition 2.1.13 Letting U be a nonmonotonic rule system with a finite language
L, we will have a finite number of extensions. List the extensions as {S_1, ..., S_m}.
Let p_i be the proof scheme deriving φ in the extension S_i, and consider the last rule,
r_i, in each p_i. Define the standard common derivation d¹_φ to be the nonmonotonic
rule

d¹_φ = ∨_i P(r_i) : ∧_i R(r_i)/φ.

We will henceforth refer to the nonmonotonic rule system ⟨U, N ∪ {d¹_φ}⟩ as U⁺.


Theorem 2.1.14 Let U = ⟨U, N⟩ be a nonmonotonic rule system with a finite
language L. Then, for every sentence φ ∈ ⋂E(U), there exists a standard common
derivation d¹_φ that generates φ in every extension of U.

Proof: List the extensions of U as {S_1, ..., S_m}. Suppose that φ is a sentence
of U that appears in every extension of the system. Since φ appears in every extension
of the system, there must be a proof scheme p_i for each S_i deriving φ in that extension.
List the proof schemes for φ as {p_1, ..., p_m}. Consider the last rule, r_i = P(r_i) : β_1^i,
..., β_{t_i}^i/φ, of each proof scheme p_i. Let d¹_φ be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i)/φ.

Let S_l be any extension of U. We only have to show that d¹_φ applies in S_l. Since S_l
is an extension of U, we have that the last rule r_l of the proof scheme p_l applies in
S_l. Thus, P(r_l) ∈ S_l, and β_1^l, ..., β_{t_l}^l ∉ S_l. Therefore,

∨_i P(r_i) ∈ S_l.

For any δ_1 ∧ ... ∧ δ_m ∈ ∧_i R(r_i), δ_l = β_t^l for some t, so δ_l ∉ S_l, and by standard
propositional logic, d¹_φ applies in S_l and concludes c(d¹_φ) = φ.

The common derivation of a formula φ in U may not be found in N, as
illustrated by the nonmonotonic rule system of Example 1.3.3. Recall that

N = {: {a}/b, : {c}/b, : {a}/c, : {c}/a},

and that U has the two extensions S_1 = {b, c} and S_2 = {b, a}. Also, as before,

GD(N, S_1) = {: {a}/b, : {a}/c}

and

GD(N, S_2) = {: {c}/b, : {c}/a}.

There is no rule of N that derives b in both extensions. However, we may consider
the common derivation of b in U, which is

d¹_b = : {(a ∧ c)}/b ∨ b = : {(a ∧ c)}/b.

Then d¹_b will generate b in each of these extensions. To see this, note that d¹_b has
no premises, and a ∧ c ∉ S_i for each i: in S_1 we have that a ∉ S_1, since : {a}/b applies in
S_1; in S_2, c ∉ S_2, since : {c}/b applies.
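Extensions of a small system like this one can be found by brute force: a candidate set S is an extension exactly when the conclusions derivable with S held fixed as the context of restraints reproduce S. The sketch below is our own encoding, not the dissertation's machinery — rules over atoms as (premises, restraints, conclusion) triples, with propositional closure deliberately omitted, which is why only atomic conclusions are handled:

```python
from itertools import combinations

def closure(rules, context):
    """Fire rules whose premises are already derived and whose restraints
    avoid the fixed context, until nothing changes."""
    derived, changed = set(), True
    while changed:
        changed = False
        for prem, restr, concl in rules:
            if prem <= derived and not (restr & context) and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def extensions(universe, rules):
    """S is an extension iff S equals the closure computed relative to S."""
    u = sorted(universe)
    return [set(c) for n in range(len(u) + 1)
            for c in combinations(u, n) if closure(rules, set(c)) == set(c)]

# Example 1.3.3: N = {: {a}/b, : {c}/b, : {a}/c, : {c}/a}
N = [(set(), {'a'}, 'b'), (set(), {'c'}, 'b'),
     (set(), {'a'}, 'c'), (set(), {'c'}, 'a')]
exts = extensions({'a', 'b', 'c'}, N)
```

This recovers exactly the two extensions {b, c} and {a, b}; checking the effect of the added rule : {(a ∧ c)}/b would require modeling the conjunction a ∧ c as a further sentence, which we omit here.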

Let U = ⟨U, N⟩ be a nonmonotonic rule system with a sentence φ occurring
in every extension of the system. Let d¹_φ be the common derivation as defined in
Definition 2.1.13. We find that the set of extensions of the original system U and
the set of extensions of the new system U⁺ will, under sufficient conditions, be the
same. However, U⁺ may have an extra extension, while still having as extensions all
the extensions of U.

Theorem 2.1.15 Let U = ⟨U, N⟩ be a nonmonotonic rule system with a finite
language L such that a sentence φ ∈ U appears in every extension of the system. Let
d¹_φ be the common derivation of φ as in Definition 2.1.13. Then,

E(U) ⊆ E(U⁺).

Proof: Let U = ⟨U, N⟩ be a nonmonotonic rule system with a finite lan-
guage L, and suppose that a sentence φ ∈ U appears in every extension of the system.
Let d¹_φ be the standard common derivation of φ as in Definition 2.1.13. Let S be any
extension of U. Then, to show that S is an extension of U⁺, we need to show that
for any u ∈ U, u ∈ S if and only if there is a proof scheme p for u in the new system
such that all premises in p are in S and all restraints in p are outside of S. Suppose
first that u ∈ S. Since S is an extension of the system U, there is a proof scheme p
for u such that all premises in p are in S and all restraints in p are outside of S.
It is clear that p is still a proof scheme for u in the new system U⁺. Next suppose
that p is an S-applicable proof scheme in U⁺. It suffices to consider the last rule r of
the proof scheme p, where the premises of r are in S, the restraints of r are not in S,
and the conclusion of r is u. If r is the common derivation, then u = φ, so φ ∈ S
by the hypothesis. Thus, S is an extension of U⁺. Hence,

E(U) ⊆ E(⟨U, N ∪ {d¹_φ}⟩).

We can generalize this result in the following way.

Theorem 2.1.16 Let U = ⟨U, N⟩ be a nonmonotonic rule system with a finite
language L such that a sentence φ ∈ U appears in every extension of the system. Let
r_φ be any nonmonotonic rule concluding φ such that r_φ applies in every extension of
U. Then,

E(U) ⊆ E(⟨U, N ∪ {r_φ}⟩).

Proof: The proof of this theorem mimics that of the previous theorem. The
key part of the proof, showing that an extension S of U is closed under the expanded
system ⟨U, N ∪ {r_φ}⟩, follows as above, since the conclusion φ of the new rule r_φ is
assumed to be in every extension of U.

We see that the sets of extensions may or may not be equal, in that the new
system U⁺ may or may not have one or more extra extensions. Recall Example 1.3.3.

Consider the nonmonotonic rule system U where U = {a, b, c}, N = {: a/b, :
c/b, : a/c, : c/a}, and U has the two extensions S_1 = {b, c} and S_2 = {b, a}, and the
common derivation of b in U is d¹_b = : a ∧ c/b. Adding this new rule to N, we see
that ⟨U, N ∪ {d¹_b}⟩ has the same extensions S_1 and S_2, since applying d¹_b does
not prevent the application of any rule in N or allow the application of any rule in
the system that was not previously applied.

To be precise, suppose there were a new extension S different from S_1 and
S_2. Then the new rule d¹_b must be generating for S. This implies that b ∈ S and
(a ∧ c) ∉ S. It follows that either a ∉ S or c ∉ S. Thus, one of the original rules,
either : {a}/b or : {c}/b, will generate for S, and either : {a}/c or : {c}/a will also
generate for S. Then, either c ∈ S or a ∈ S, but this means that either S_1 ⊆ S or
S_2 ⊆ S. By the noninclusive property, either S = S_1 or S = S_2.

Example 2.1.9 Now consider the nonmonotonic rule system U where

U = {p_1, p_2, c_1, c_2, a}

and

N = {p_1 : /c_1; p_2 : /c_2; : a, p_1/p_2; : a, p_2/p_1; : p_1, p_2/p_1 ∨ p_2; c_1 ∨ c_2 : c_1, c_2/a}.

This theory has two extensions,

S_1 = {p_2, c_2}

and

S_2 = {p_1, c_1}.

The formula c_1 ∨ c_2 is in each extension, and we find that

r_1 = p_2 : /c_2

and

r_2 = p_1 : /c_1,

so that

d¹_{c_1 ∨ c_2} = p_1 ∨ p_2 : /c_1 ∨ c_2.


Then, S_1 and S_2 are both extensions of ⟨U, N ∪ {d¹_{c_1 ∨ c_2}}⟩, but this new nonmono-
tonic rule system will have a third extension

S_3 = {p_1 ∨ p_2, c_1 ∨ c_2, a},

which is not an extension of the original system U.

In the previous example, S_3 is the only new extension obtained by adding the
common derivation to the set of rules. To see this, let S be any new extension. The
common derivation must generate for S, else S is an extension of the original system.
Thus, we have that p_1 ∨ p_2 ∈ S, so that the conclusion c_1 ∨ c_2 is in S as well. If p_1 ∈ S,
then c_1 ∈ S by the rule p_1 : /c_1, so that S_2 ⊆ S, a contradiction to the noninclusive
property of extensions, since S is not an extension of the original system. Similarly,
if p_2 ∈ S, then c_2 ∈ S, so that S_1 ⊆ S, a contradiction. Thus, neither of p_1 or p_2 is
in S. Also, if a is not in S, then the rule : {a, p_2}/p_1 applies to conclude p_1 in S.
Thus, a ∈ S. From this we now have that S_3 ⊆ S, so that S_3 = S, since extensions
are noninclusive. Thus, there is only the one new extension.

Since the standard common derivation is based on the rules in the proof schemes
of φ instead of the extensions of the system, it might be possible in an infinite system.
The good news is that the set of extensions of the new system contains the set of
extensions of the original system. The bad news is that these sets of extensions may
not be equal. We would prefer that they be equal.

We see that this form of the common derivation, although interesting, does not
reach the goals we were after. We strive further towards the goal of equal sets
of extensions in the next chapter.

If we have a formula φ appearing in some but not all of the extensions of U, we
may, in the same fashion as Definition 2.1.13, create a common derivation for φ that
generates φ in each extension of U in which it appears. However, we find that in this
case Theorem 2.1.15 becomes false, as the next example illustrates.


Example 2.1.10 Let U = ⟨U, N⟩ be the nonmonotonic rule system where

N = {: {a_1, a_2}/a_3, : {a_2, a_3}/a_1, : {a_1, a_3}/a_2,

p_1 : {¬a_1}/c_1, p_2 : {¬a_2}/c_2, a_3 : /p_1 ∨ p_2, a_1 : /p_1, a_2 : /p_2}.

This theory has as three of its extensions

S_1 = Cn({a_3, p_1 ∨ p_2}),

S_2 = Cn({a_2, p_2, c_2}),

and

S_3 = Cn({a_1, p_1, c_1}).

We see that the formula c_1 ∨ c_2 is in both S_2 and S_3, but is not in S_1. We find the
standard common derivation for c_1 ∨ c_2 to be

d¹_{c_1 ∨ c_2} = p_1 ∨ p_2 : {¬(a_1 ∨ a_2)}/c_1 ∨ c_2.

Adding this new rule to the set of rules N, we find that both S_2 and S_3 are extensions
of the new nonmonotonic rule system ⟨U, N ∪ {d¹_{c_1 ∨ c_2}}⟩. S_1, however, is not an
extension of ⟨U, N ∪ {d¹_{c_1 ∨ c_2}}⟩, since it is no longer closed under the set of rules.
It is important to note that S_1 is a subset of Cn({a_3, p_1 ∨ p_2, c_1 ∨ c_2}), which is an extension
of ⟨U, N ∪ {d¹_{c_1 ∨ c_2}}⟩.

Theorem 2.1.17 Let U = ⟨U, N⟩ be a nonmonotonic rule system with a finite
language L such that φ appears in some but not all of the extensions and each gener-
ating rule for each extension S is a generating rule for S ∪ {φ}. Let d¹_φ be the
common derivation of φ as defined in Definition 2.1.13. Then, for any extension S of
U such that φ ∈ S, there is an extension S⁺ of U⁺ such that S = S⁺. Furthermore,
any extension of U⁺ which is not an extension of U must contain φ.


Proof: Suppose that U is a nonmonotonic rule system with a finite language
L such that φ appears in at least one, but not all, of the extensions. Let d¹_φ be
the standard common derivation of φ as defined in Definition 2.1.13. If φ appears
in only one of the extensions of U, then the common derivation d¹_φ is found in N,
so that ⟨U, N⟩ = U⁺, and the theorem is trivially true. Thus, suppose that φ
appears in at least two extensions of the nonmonotonic rule system U, but does not
appear in every extension. Let S be an extension of U. If S is an extension of the
new system U⁺, we are done. Also, if φ is in S, then S is clearly an extension of
⟨U, N ∪ {d¹_φ}⟩, as in the proof of Theorem 2.1.15. Hence, consider the case in
which S is an extension of the new system U⁺, but is not an extension of the old
system U. If d¹_φ does not generate for S, then

S = c(GD(N ∪ {d¹_φ}, S)) = c(GD(N, S)),

so that S is an extension of the system U, a contradiction. Thus, d¹_φ generates for
S, so that

φ = c(d¹_φ) ∈ c(GD(N ∪ {d¹_φ}, S)) = S.

The added condition of this theorem, requiring each generating rule for
each extension S to be a generating rule for S ∪ {φ}, is necessary for the following
reason. If we consider the previous example and add the rule a_3 : /¬(c_1 ∨ c_2) to the
set N of rules, S_1 becomes

S_1 = Cn({a_3, p_1 ∨ p_2, ¬(c_1 ∨ c_2)}).

Then c_1 ∨ c_2 cannot be added to S_1.

Now we explore the question of what happens if we take the process further:
finding the intersection of the extensions, creating a common derivation, adding
that rule to the nonmonotonic rule system, and repeating, over and over.


Suppose we let I_1 be the intersection of all the extensions of the original
nonmonotonic rule system U. Then, in Example 2.1.9,

I_1 = Cn({(p_2 ∧ c_2) ∨ (p_1 ∧ c_1)}).

We consider the new rule system

U_2 = ⟨U, N_2⟩ = ⟨U, N ∪ {d¹_{c_1 ∨ c_2}}⟩

and the set I_2 of all conclusions common to every extension of U_2. We find that

I_2 = Cn({(p_1 ∨ p_2) ∧ (c_1 ∨ c_2)}) ⊂ I_1.

Moreover, we could choose a conclusion common to all the extensions of U_2,
such as p_1 ∨ p_2. This has the common derivation

d²_{p_1 ∨ p_2} = : {a ∨ p_1, a ∨ p_2, a, p_1 ∨ p_2}/p_1 ∨ p_2.
In any new extension S of

U_3 = ⟨U, N_2 ∪ {d²_{p_1 ∨ p_2}}⟩

that was not an extension of the theory U_2, this common derivation must apply. Sim-
ilarly to the previous argument, none of p_1, p_2, c_1, or c_2 can be in S, else we contradict
the noninclusive property of extensions. p_1 ∨ p_2 must be in S, since the common
derivation applies. Thus, by the common derivation for c_1 ∨ c_2, we conclude c_1 ∨ c_2 in
S, and thus a is in S by the appropriate rule from N. This is a contradiction, since the
common derivation requires that a not be in S. Thus, there are no new extensions;
that is, U_2 has the same set of extensions as U_3.

On a different note, we may force the intersections I_1 and I_2 to be equal by
taking φ to be ∧I_1. This appears in every extension of U. Letting

N_2 = N ∪ {d¹_φ},

we find that if S is any new extension, then φ ∈ S, since the common derivation must
apply (else S is not new). We then get the following theorem.

Theorem 2.1.18 Let φ be the conjunction of the intersection I_1 of all extensions of
U. Then the intersection I_2 of all extensions of U⁺ equals I_1.

Proof: Clearly we have that I_2 ⊆ I_1. Now suppose that ψ ∈ I_1, and let S
be any extension of U⁺. If S is an extension of U, then ψ ∈ S, since ψ ∈ I_1. Thus,
suppose that S is an extension of U⁺ that is not an extension of the original system
U. This new extension must contain φ. Since φ is the conjunction of the intersection
of extensions I_1, we have that φ → ψ. Thus the extension S must contain ψ. Thus,
I_2 = I_1.

We may consider any nonmonotonic rule system U, and the set

I_1 = ⋂E(U).

For every φ in this set, we can find a common derivation d¹_φ which generates φ in
each extension S of U. Consider the nonmonotonic rule system U_2 defined by

⟨U, N ∪ {d¹_φ | φ ∈ I_1}⟩.

We may then repeat the procedure to find

I_2 = ⋂E(U_2),

and if this set is not empty, then we can find common derivations d²_φ for any φ in
I_2 such that d²_φ generates φ in every extension S of U_2. Then, we may consider the
nonmonotonic rule system U_3 defined by

⟨U, N_2 ∪ {d²_φ | φ ∈ I_2}⟩.

We continue this construction by, assuming that U_n is defined, letting

I_n = ⋂E(U_n).

Then, letting dⁿ_φ be the common derivation for φ ∈ I_n in U_n, we define U_{n+1} to be

⟨U, N_n ∪ {dⁿ_φ | φ ∈ I_n}⟩.

This may be continued until I_k is empty for some k and/or N_k = N_l for all l > k
for some k. This inductive definition raises the following questions. Is there a limit to
this process, i.e., is there always some k for which I_k is empty and/or N_k = N_l for
all l > k? Does it make a difference if we work in the case where N is finite versus
countably or uncountably infinite? These questions should be investigated.
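The iteration U, U_2, U_3, ... can be sketched in code. Everything in the following sketch is our own scaffolding, not the dissertation's machinery: rules are (premises, restraints, conclusion) triples over atoms, the extension finder is a brute-force fixpoint check, and the axiom-style rule : /φ stands in as a placeholder for whichever common derivation is used at each stage.

```python
from itertools import combinations

def iterate(rules, find_extensions, common_derivation, max_rounds=100):
    """I_n = intersection of the extensions of U_n; add a common derivation
    for each phi in I_n; stop when the rule set stabilizes (N_k = N_{k+1})."""
    rules = set(rules)
    for _ in range(max_rounds):
        exts = find_extensions(rules)
        if not exts:
            break
        common = set.intersection(*exts)
        new_rules = rules | {common_derivation(phi) for phi in common}
        if new_rules == rules:          # N_k = N_l for all l > k
            break
        rules = new_rules
    return rules

# Toy instantiations over atoms, for the system of Example 1.3.3.
def closure(rules, context):
    derived, changed = set(), True
    while changed:
        changed = False
        for prem, restr, concl in rules:
            if prem <= derived and not (restr & context) and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def find_extensions(rules):
    atoms = sorted({c for _, _, c in rules} | {a for _, r, _ in rules for a in r})
    return [set(c) for n in range(len(atoms) + 1)
            for c in combinations(atoms, n) if closure(rules, set(c)) == set(c)]

# Axiom-style stand-in for the common derivation at each stage: the rule : /phi.
common_derivation = lambda phi: (frozenset(), frozenset(), phi)

N = {(frozenset(), frozenset({'a'}), 'b'), (frozenset(), frozenset({'c'}), 'b'),
     (frozenset(), frozenset({'a'}), 'c'), (frozenset(), frozenset({'c'}), 'a')}
stable = iterate(N, find_extensions, common_derivation)
```

On this example the process stabilizes after one round with the single added rule : /b, mirroring Example 2.1.11 below; in general an axiom-style stand-in could of course introduce new extensions, which is exactly the phenomenon the questions above concern.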

Example 2.1.11 Consider the nonmonotonic rule system of Example 1.3.3. Using
this nonmonotonic rule system, we would find N_2 to be

N ∪ {d¹_b} = {: {a}/b, : {c}/b, : {a}/c, : {c}/a, : {(a ∧ c)}/b}.

U_2 has the same extensions as U, so that I_2 = I_1 = {b} and d²_b = d¹_b. Then N_k = N_2
for all k > 1.

Remark 2.1.19 By Theorem 2.1.15, we have that E(U) ⊆ E(U_2) ⊆ ... ⊆ E(U_n) ⊆
E(U_{n+1}) ⊆ ....

For a finite language, or just a finite I_1, we must eventually have I_{n+1} = I_n
for some n. At this point, we find that we will have a common derivation in N_n for
each φ ∈ I_n, when we consider N_n to be defined using all of the common derivations
for conclusions in I_{n-1}. U_n will then be said to have common derivations.

Theorem 2.1.20 For any nonmonotonic rule system U with finite I_1 or with finite
N, there exists an n such that U_n has common derivations, and U_n is equivalent to
the nonmonotonic rule system U_{n+k} for any k.


For an alternate approach, we might consider the nonmonotonic rule system
U and the set I_1 as before. However, we may choose one element φ from I_1 and define
⟨U, N_2⟩ to be the nonmonotonic rule system

U⁺,

as seen in Example 6.3.26. We may then consider the set of all formulas which
appear in every extension of ⟨U, N_2⟩, choose some ψ among those, and define
⟨U, N_3⟩ to be

⟨U, N_2 ∪ {d²_ψ}⟩.

We may continue in this way, with the same result as in Theorem 2.1.20.

2.2 Finite Constructive Nonmonotonic Rule Systems

Again, consider a single nonmonotonic rule system U with a finite set of ex-
tensions and/or a finite language. In this section, we consider the same question
about nonmonotonic rule systems as in the previous section, but use a constructive
approach. That is, we consider that pVq is not derived unless one of p or q is derived.
Note that we will no longer implicitly assume the propositional axiom pV->p. Had we
chosen to keep this axiom, while considering a constructive view, all the extensions of
every system would be complete, and consequently uninteresting. We will, however,
keep the propositional rules for conjunctions as we did in the classical case.

Again, we consider φ, a formula of the language L such that φ ∈ ⋂E(U).
Then φ ∈ S for every extension S of U. Since each extension S is of the form
c(GD(N, S)), there must be, as before, a proof scheme p_i deriving φ for each extension
S_i. The advantage of the constructive approach is that it allows us to view each set
of restraints as a single formula instead of as a set of formulas. Then, instead of
asking whether each element of the set of restraints is not in a context, we would ask
whether the restraint itself is not in the context. As it stands, the last rule r_i in each
proof scheme p_i has as its restraints the set {β_1^i, ..., β_{t_i}^i} for some formulas β_j^i
of the language. We may instead let R(r_i) = β_1^i ∨ ... ∨ β_{t_i}^i. Then R(r_i) is in
the extension S_i if and only if β_j^i ∈ S_i for some j, a contradiction since r_i generates
for S_i. We find that several of the theorems of the last section still hold.
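The point of the single-formula restraint can be stated operationally. A two-line sketch (our own encoding: a constructive context as the set of formulas it derives, a disjunction as the list of its disjuncts):

```python
def derives_disjunction(context, disjuncts):
    """Constructively, a disjunction beta_1 OR ... OR beta_t is in an
    extension iff some single disjunct beta_j is in it."""
    return any(b in context for b in disjuncts)

def rule_applies(context, premises, restraint_disjuncts):
    """A rule applies in S when its premises are in S and its single
    restraint formula R(r) = beta_1 OR ... OR beta_t is not derived in S."""
    return all(p in context for p in premises) and \
           not derives_disjunction(context, restraint_disjuncts)
```

Checking one disjunction for membership thus replaces checking each member of a restraint set, which is the computational simplification noted later in this section.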

Theorem 2.2.21 For any formula φ of the finite language L such that φ ∈ ⋂E(U),
φ has a common derivation d¹_φ which generates φ in each extension S of U, where U
is a constructive nonmonotonic rule system.

Proof: Suppose U is a constructive nonmonotonic rule system with a finite
language L. Suppose further that φ is a sentence of U that appears in every extension
of the system. List the extensions of U as S_1, ..., S_m. Then for each extension S_i of
U there is a proof scheme p_i deriving φ in S_i. Consider the last rule, r_i = P(r_i) : β_1^i,
..., β_{t_i}^i/φ, of each proof scheme p_i. Let d¹_φ be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i)/φ.

Let S_l be any extension of U. We only have left to show that d¹_φ applies in S_l. Since
S_l is an extension of U, there is some proof scheme p_l that derives φ in
S_l. Since this proof scheme derives φ in S_l, each rule in the proof scheme applies in
S_l, so that r_l applies in S_l. Thus, P(r_l) ∈ S_l and β_1^l, ..., β_{t_l}^l ∉ S_l. Thus,

∨_i P(r_i) ∈ S_l,

and any δ_1 ∧ ... ∧ δ_m ∈ ∧_i R(r_i) will not be in S_l. Thus, d¹_φ applies in S_l and concludes
c(d¹_φ) = φ. Thus, for any formula φ of the language such that φ ∈ ⋂E(U), φ has a
common derivation d¹_φ which generates φ in each extension S of U. Note that d¹_φ
may be in N, but then U⁺ will have the same set of extensions as U.


Example 2.2.12 Consider the nonmonotonic rule system of Example 1.3.3. Under
the approach of this section, we let

N = {: a/b, : c/b, : a/c, : c/a}.

Then the nonmonotonic rule system U has the two extensions

S_1 = {b, c}

and

S_2 = {b, a}.

Also,

GD(N, S_1) = {: a/b, : a/c}

and

GD(N, S_2) = {: c/b, : c/a}.

Just as before, there is no rule of N that derives b in both extensions. However, we
may consider the common derivation of b in U by the methods of this section, which
is

d¹_b = : (a ∧ c)/b.

Then, d¹_b will generate b in each of these extensions. Compare this version of the
common derivation of b in U to that of the previous section, where we had

d¹_b = : {(a ∧ c)}/b.

We see that the two methods produce very similar results, but that the common deriva-
tion produced by the methods of this section is computationally simpler.

By the same argument as in Theorem 2.1.15, we still have that

E(U) ⊆ E(⟨U, N ∪ {d¹_φ}⟩)

for any φ ∈ ⋂E(U); equivalently, E(U) ⊆ E(U⁺) for any φ ∈ ⋂E(U).
Also as before, the converse of this theorem is false, as illustrated in Example 2.1.9.

We must then ask if there might be a better formulation of the common
derivation, one that would ensure the same set of extensions when added to the system.
There is, but it is not without strings attached.

Theorem 2.2.22 For any formula φ of a finite language such that φ ∈ ⋂E(U),
define the strong common derivation d*_φ to be

∨_i P(r_i) : ⋃_i R(r_i)/φ.

Then, d*_φ ∈ GD(N ∪ {d*_φ}, ⋂E(U)).

Proof: Let φ be a formula of the language such that φ ∈ ⋂E(U). Consider
d*_φ as defined above. Clearly d*_φ ∈ N ∪ {d*_φ}. Now, for every extension S_i in E(U),
we have that some proof scheme p_i generates φ in S_i, so that P(r_i) ∈ S_i. Thus,
P(d*_φ) = ∨_i P(r_i) ∈ ⋂E(U). Lastly, suppose some restraint R(r_i) of d*_φ were in
⋂E(U). Then R(r_i) ∈ S_i, the extension for which p_i generates φ; but r_i generates
for S_i, so R(r_i) ∉ S_i, a contradiction. So we have that no element of R(d*_φ) =
⋃_i R(r_i) is in ⋂E(U). Thus d*_φ generates φ in ⋂E(U).

Note that although this new rule generates for the intersection of all the ex-
tensions, it may not generate for each particular extension.

Example 2.2.13 Consider the nonmonotonic rule system U where

N = {: {a}/b, : {b}/a}.

This rule theory has two extensions,

S_1 = {b}

and

S_2 = {a}.

We have that the formula a ∨ b is in both of the extensions, and we find that

d*_{a ∨ b} = : {a, b}/a ∨ b.

By the theorem, d*_{a ∨ b} generates for the intersection of the two extensions. However,
it does not generate for either of the extensions S_1 or S_2, since b ∈ S_1 and a ∈ S_2,
respectively.

We may again inductively define the nonmonotonic rule systems U_n as we did in
the last section. Investigations into the questions raised by this definition are further
warranted in this setting, as the common derivations are computationally simpler and
the constructive view may make a difference in what the answers to those questions
will be.

Example 2.2.14 Consider the nonmonotonic rule system of Example 1.3.3. Using
this rule system and the methods of this section, we would find N_2 to be

N ∪ {d¹_b} = {: a/b, : c/b, : a/c, : c/a, : (a ∧ c)/b}.

Again, U_2 has the same extensions as U, so that I_2 = I_1 = {b} and d²_b = d¹_b. Then
N_k = N_2 for all k > 1.

2.3 Characterizing the Set of Extensions

Now, since the intersections of families of extensions are of particular interest,
we might ask which families of extensions have intersections which yield a specific
theory. Knowing this, given a theory which we desire to see as the intersection of
extensions we may create the nonmonotonic rule system whose extensions intersect to
yield that theory. Note that we will not implicitly assume any rules of propositional
logic for the case of this construction.


Theorem 2.3.23 Given any finite family of sets

F = {T_1, T_2, ..., T_n}

in the language L, there is a nonmonotonic rule system U whose set of extensions
E(U) is exactly F.

Proof: Let

F = {T_1, T_2, ..., T_n}

be any finite family of sets. We create the system U in the following way. Let U be
the set of atoms of the language L. Let

N = {: {∧T_j | j ≠ i}/∧T_i | 1 ≤ i ≤ n}.

Then U will have F as its set of extensions.

For example, for the family {{a, b, c}, {a, c, d}, {a, b, d}} we would choose the
nonmonotonic rule system ⟨U, N⟩ where

N = {: {a ∧ c ∧ d, a ∧ b ∧ d}/a ∧ b ∧ c, : {a ∧ b ∧ c, a ∧ b ∧ d}/a ∧ c ∧ d,
: {a ∧ b ∧ c, a ∧ c ∧ d}/a ∧ b ∧ d}.

This system will have exactly the three extensions {a, b, c}, {a, c, d}, and {a, b, d}.
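The construction in the proof is easy to carry out mechanically. The sketch below is our own encoding: a conjunction ∧T is modeled as the frozenset of its conjuncts, and, since no propositional rules are assumed, we identify each T_i with the single formula ∧T_i when checking the defining fixpoint:

```python
def rules_for_family(family):
    """N = { : {AND T_j | j != i} / AND T_i | 1 <= i <= n }, with the
    conjunction AND T modeled as the frozenset of its conjuncts."""
    fam = [frozenset(t) for t in family]
    return [(set(), {u for u in fam if u != t}, t) for t in fam]

def is_extension(rules, s):
    """With no implicit propositional logic, S must equal exactly the set
    of conclusions of the rules that apply relative to S."""
    fired = {concl for prem, restr, concl in rules
             if prem <= s and not (restr & s)}
    return fired == s

fam = [{'a', 'b', 'c'}, {'a', 'c', 'd'}, {'a', 'b', 'd'}]
rules = rules_for_family(fam)
```

For each member T_i, the context {∧T_i} blocks the other two rules and fires its own, so each member of the family passes the check, while, e.g., the empty context does not.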

So we see that any given theory may be derived as the intersection of a set of
extensions by determining what family of sets intersects to yield the desired theory
and then constructing the appropriate nonmonotonic rule system whose extensions
are that family.

The constructive case will differ only in that the number of acceptable
extensions is greatly reduced. This is explored in detail in terms
of Default Logic in Chapter 6.

CHAPTER 3
THE INFINITE CASE

In this chapter we consider nonmonotonic rule systems with infinitely many
extensions and an infinite language. This necessitates the concept of a locally finite
system as seen in the introduction. We first consider nonmonotonic rule systems in
the classical view and then in the constructive view. Lastly, we consider the task of
characterizing the set of extensions of a locally finite nonmonotonic rule system.

3.1 Locally Finite Classical Nonmonotonic Rule Systems

Considering locally finite nonmonotonic rule systems, we can obtain results for the infinite case similar to those of the finite case. We should note, however, that the support-based common derivation d^2_φ as shown in the finite case is no longer possible in the infinite case, since it would result in an infinite conjunction in the restraints of the rule. Still applicable, however, is the standard common derivation d^1_φ.

Theorem 3.1.24 Let U = <U, N> be a locally finite nonmonotonic rule system. Then, for every sentence φ ∈ ∩E(U), there exists a standard common derivation d^1_φ that generates φ in every extension of U.

Proof: Suppose U is a locally finite nonmonotonic rule system. Suppose further that φ is a sentence of U that appears in every extension of the system. Since the system is locally finite, there are only finitely many minimal proof schemes for φ. List the proof schemes for φ as Dr_φ = {p_1, ..., p_m}. Consider the last rule, r_i = P(r_i) : β^i_1, ..., β^i_{k_i} / φ, of each proof scheme p_i. Now, since φ appears in every extension of the system, for each extension S there is a proof scheme that derives φ in S; call it p_S. Since each rule of the proof scheme applies in S, there must be a minimal proof scheme p_i, for some i ∈ {1, ..., m}, contained in p_S, that derives φ in S. Let d^1_φ be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i) / φ.

Let S be any extension of U. We have left only to show that d^1_φ applies in S. Since S is an extension of U, there is some minimal proof scheme p_i ∈ Dr_φ that derives φ in S. Since this proof scheme derives φ in S, each rule in the proof scheme applies in S, so that r_i applies in S. Thus, P(r_i) ∈ S and β^i_1, ..., β^i_{k_i} ∉ S. Thus,

P(r_i) ∈ S, so ∨_i P(r_i) ∈ S,

and any δ_1 ∧ ... ∧ δ_m ∈ ∧_i R(r_i) will not be in S. Thus, d^1_φ applies in S and concludes c(d^1_φ) = φ.
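The construction of d^1_φ from the terminal rules is itself mechanical. The following sketch builds it symbolically, under the assumptions that a rule is a (premise, restraints, conclusion) triple and that compound formulas are encoded as ('or', ...) and ('and', ...) tuples; the function name and encoding are illustrative, not from the text.

```python
from itertools import product

def standard_common_derivation(terminal_rules, phi):
    """Build d^1_phi = (or_i P(r_i)) : (and_i R(r_i)) / phi from the terminal
    rules of the minimal proof schemes for phi.  The premise is the disjunction
    of the rules' premises; the restraint set and_i R(r_i) consists of every
    conjunction formed by picking one restraint from each R(r_i)."""
    premise = ("or",) + tuple(prem for prem, _, _ in terminal_rules)
    restraint_sets = [restr for _, restr, _ in terminal_rules]
    # one conjunction delta_1 ^ ... ^ delta_m per choice of delta_i in R(r_i);
    # if some R(r_i) is empty, the product is empty and d^1_phi has no restraints
    restraints = [("and",) + choice for choice in product(*restraint_sets)]
    return premise, restraints, phi

# Example 3.1.16: terminal rules a1 : /b and a2 : /b for the atom b.
r1 = ("a1", [], "b")
r2 = ("a2", [], "b")
d1_b = standard_common_derivation([r1, r2], "b")
print(d1_b)  # (('or', 'a1', 'a2'), [], 'b'), i.e. d^1_b = a1 or a2 : /b
```

With restraint-free terminal rules the product is empty, so the resulting rule is restraint-free, exactly as in Example 3.1.16.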

We must ask several questions concerning how this new rule affects the system U. First, if the system U is locally finite, what happens when we add the new rule to form the system U⁺? Is it still locally finite? In terms of the sets of extensions, will we find that

E(U) = E(U⁺)?

We find that the expanded system may no longer be locally finite, as seen in
the following example.

Example 3.1.15 Let U = <U, N> be the nonmonotonic rule system where

U = {a_1, a_2, b, d, e} ∪ {c_i | i ∈ ω}

and

N = { a_1 : /b ;  a_2 : /b ;  : a_1, d / a_2 ;  : a_2, d / a_1 ;  : a_1, a_2 / a_1 ∨ a_2 ;  a_1 ∨ a_2, b : a_1, a_2 / d }

∪ { a_1 ∨ a_2, b : a_1, a_2 / c_i | i ∈ ω }

∪ { c_i : a_1, a_2 / e | i ∈ ω }.

This system has two extensions, S_1 = {a_1, b} and S_2 = {a_2, b}. The atom b appears in each and has the standard common derivation d^1_b = a_1 ∨ a_2 : /b. Upon adding this rule to the set N, we find that the new system provides an infinite number of minimal proof schemes for e. Thus the new system is not locally finite.

The problem arises that there may be infinitely many distinct rules concluding a sentence ψ ∈ U that are not applied in any extension and therefore do not enter into the set of minimal proof schemes for ψ. However, the addition of the common derivation to the set of nonmonotonic rules may allow these rules to become applicable and thereby the set of minimal proof schemes to be infinite. Then the new system is not locally finite. To mend this problem, we must rid the system of rules that are not applied in any extension prior to the addition of the common derivation.

Definition 3.1.25 Call a nonmonotonic rule r ∈ N ACTIVE if it applies in some extension of the rule system U; call the rule DORMANT if it does not apply in any extension of the system. Call a nonmonotonic rule system U DORMANT if every rule in N is dormant. Call the system PARTIALLY DORMANT if N has at least one dormant rule and at least one active rule. Finally, call the system ACTIVE if every rule in N is active.

[Note that if a system is dormant, then it has no extensions.]
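The active/dormant classification is a direct check against the known extensions. A minimal sketch, assuming rules are (premises, restraints, conclusion) triples, "applies in S" means every premise is in S and no restraint is, and the toy rules below are chosen only to illustrate the two categories:

```python
def applies(rule, S):
    """A rule P(r) : R(r) / c(r) applies in S when every premise is in S
    and no restraint is in S."""
    prem, restr, _ = rule
    return set(prem) <= set(S) and not set(restr) & set(S)

def classify(rules, extensions):
    """Split N into ACTIVE rules (apply in some extension) and DORMANT ones
    (apply in no extension)."""
    active = [r for r in rules if any(applies(r, S) for S in extensions)]
    dormant = [r for r in rules if r not in active]
    return active, dormant

# Toy illustration: with extensions S1 = {a1, b} and S2 = {a2, b}, the rule
# : a1, a2 / d is dormant, since some a_i blocks it in each extension.
S1, S2 = {"a1", "b"}, {"a2", "b"}
rules = [(["a1"], [], "b"), (["a2"], [], "b"), ([], ["a1", "a2"], "d")]
active, dormant = classify(rules, [S1, S2])
print(dormant)  # [([], ['a1', 'a2'], 'd')]
```

A system is then active, dormant, or partially dormant according to whether the dormant list is empty, everything, or strictly in between.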

Theorem 3.1.26 Let U = <U, N> be a locally finite active nonmonotonic rule system such that φ appears in every extension of the system. If d^1_φ is the standard common derivation for φ in U, then U⁺ is locally finite and active.

Proof: Suppose U is a locally finite active nonmonotonic rule system such that φ appears in every extension of the system. Let d^1_φ be the standard common derivation for φ in U. Consider the nonmonotonic rule system U⁺ created by adding the common derivation to the set of nonmonotonic rules:

U⁺ = <U, N ∪ {d^1_φ}>.

Given that the original system is active, every rule in N applies in some extension of U. Since the system <U, N ∪ {d^1_φ}> will retain the extensions of the original system, and any extra extensions obtained are such that the common derivation applies, the new system is active. Further, U is locally finite, so the set of minimal proof schemes for any sentence ψ ∈ U is finite. These sets of minimal proof schemes will remain finite unless applying the common derivation allows the application of an infinite list of rules that conclude some ψ ∈ U. Suppose this is true. This infinite list of rules must be from N, as the common derivation is the only rule added to obtain the new system. Thus, these rules are active, so that each rule is applied in some extension of U. Thus, each rule provides a proof scheme for ψ in U. Since each rule is distinct and concludes ψ, none of these proof schemes is contained in any other, so that there are infinitely many minimal proof schemes for ψ and U is not locally finite, a contradiction. Thus, the new system U⁺ is locally finite.

Let U = <U, N> be a nonmonotonic rule system with a sentence φ occurring in every extension of the system. Let d^1_φ be the standard common derivation as defined in Theorem 3.1.24. We find that the set of extensions of the original system U and the set of extensions of the new system U⁺ will, under sufficient conditions, be the same. However, U⁺ may have an extra extension, while still retaining all the extensions of U.

Theorem 3.1.27 Let U = <U, N> be a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,

E(U) ⊆ E(U⁺).

Proof: The proof for this theorem is similar to that of Theorem 2.1.15.

We can generalize this result in the following way, as in the finite chapter:

Theorem 3.1.28 Let U = <U, N> be a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let r_φ be any nonmonotonic rule concluding φ such that r_φ applies in every extension of U. Then,

E(U) ⊆ E(<U, N ∪ {r_φ}>).

Proof: The proof of this theorem mimics that of the previous theorem.

We see that the sets of extensions may or may not be equal, in that the new system U⁺ may have one or more extra extensions. Consider the following example:

Example 3.1.16 Consider the nonmonotonic rule system U from Example 3.1.15. This system has the two extensions S_1 = {a_1, b} and S_2 = {a_2, b}. The atom b appears in each and has terminal rules

r_1 = a_1 : /b

and

r_2 = a_2 : /b,

so that the standard common derivation is

d^1_b = a_1 ∨ a_2 : /b.

Upon adding this rule to the set N, we find that the new system acquires an extra extension. S_1 and S_2 are both extensions of <U, N ∪ {d^1_b}>, but we will have a third extension

S_3 = {a_1 ∨ a_2, b, d, e} ∪ {c_i | i ∈ ω},

which is not an extension of the original system U.

In the previous example, S_3 is the only new extension obtained by adding the common derivation to the set of rules. To see this, let S be any new extension. The common derivation must generate for S, else S is an extension of the original system. Thus, we have that a_1 ∨ a_2 ∈ S, so that the conclusion b is in S as well. If a_1 ∈ S, then b ∈ S by the rule a_1 : /b, so that S_1 ⊆ S, a contradiction to the noninclusive property of extensions, since S is not an extension of the original system. Similarly, if a_2 ∈ S, then b ∈ S, so that S_2 ⊆ S, a contradiction. Thus, neither a_1 nor a_2 is in S. Then the rules : a_1, a_2 / a_1 ∨ a_2 and a_1 ∨ a_2, b : a_1, a_2 / c_i for each i apply, together with the rules concluding d and e, so that c_i ∈ S for each i. From this we now have that S_3 ⊆ S, so that S_3 = S, since extensions are noninclusive. Thus, there is only the one new extension.

Further, we might try considering active nonmonotonic rule systems, in hopes
that this would cure the new system of its extra extensions. This does not help, as
seen in the next example.

Example 3.1.17 Consider the nonmonotonic rule system U where

U = {p_1, p_2, j_1, j_2, c}

and

N = { p_1 : ¬j_1 / c ;  p_2 : ¬j_2 / c ;  : p_1 / p_2 ;  : p_2 / p_1 ;  c : ¬j_1 / ¬j_2 ;  c : ¬j_2 / ¬j_1 }.

This system can be taken to be active and has two extensions,

S_1 = {p_1, c, ¬j_2}

and

S_2 = {p_2, c, ¬j_1}.

c appears in both extensions and we see that

d^1_c = p_1 ∨ p_2 : ¬j_1 ∧ ¬j_2 / c.

Adding this rule to the set N, we find that the new system, <U, N ∪ {d^1_c}>, has both S_1 and S_2 as extensions but also has two other extensions, S_3 and S_4, where

S_3 = {p_1, ¬j_1, c}

and

S_4 = {p_2, ¬j_2, c}.
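Example 3.1.17 is small enough to verify exhaustively. The sketch below uses a constructive encoding that is an assumption of this illustration: a premise {p1, p2} stands for p_1 ∨ p_2 (some member must be derived), a restraint {nj1, nj2} stands for ¬j_1 ∧ ¬j_2 (it blocks only when all members are in S), and nj_i stands for ¬j_i. Enumerating candidate subsets then recovers the two extensions of N and the four extensions after d^1_c is added.

```python
from itertools import chain, combinations

def closure(rules, S):
    """Conclusions derivable when each rule's restraints are checked against S.
    A premise is a set of alternatives (a disjunction); a restraint is a set
    that blocks only when all of its members lie in S (a conjunction)."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for prems, restrs, concl in rules:
            if (all(opt & derived for opt in prems)
                    and not any(conj <= S for conj in restrs)
                    and concl not in derived):
                derived.add(concl)
                changed = True
    return derived

def extensions(rules, universe):
    subsets = chain.from_iterable(combinations(universe, k) for k in range(len(universe) + 1))
    return {frozenset(S) for S in subsets if closure(rules, set(S)) == set(S)}

N = [([{"p1"}], [{"nj1"}], "c"), ([{"p2"}], [{"nj2"}], "c"),
     ([], [{"p1"}], "p2"),       ([], [{"p2"}], "p1"),
     ([{"c"}], [{"nj1"}], "nj2"), ([{"c"}], [{"nj2"}], "nj1")]
d1_c = ([{"p1", "p2"}], [{"nj1", "nj2"}], "c")   # d^1_c = p1 or p2 : not-j1 and not-j2 / c

atoms = ["p1", "p2", "nj1", "nj2", "c"]          # j1, j2 are never concluded
print(len(extensions(N, atoms)))                  # 2: S1 and S2
print(len(extensions(N + [d1_c], atoms)))         # 4: S1, S2, S3, S4
```

In S_3, the restraint ¬j_1 ∧ ¬j_2 of d^1_c is not blocked (¬j_2 is missing), so c is derived through the common derivation even though both original rules for c are inapplicable, which is exactly how the extra extensions arise.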

At this point, all possible versions of the common derivation that carry over
from the finite classical case have been exhausted. Yet we crave a form of the common
derivation that may have better luck retaining the same set of extensions when added
to the original set N of rules. We turn then to the following more complex rules.
These are constructed with more intricate restraints that are meant to rule out the
causes of extra extensions in U⁺.

Definition 3.1.29 We define two new versions of the common derivation, d_1(φ) and d_2(φ), similarly to the definition of d^1_φ, in that we consider the last rule in each of the minimal proof schemes of φ:

Suppose U is a locally finite nonmonotonic rule system. Suppose further that φ is a sentence of U that appears in every extension of the system. Since the system is locally finite, there are only finitely many minimal proof schemes for φ. List the proof schemes for φ as Dr_φ = {p_1, ..., p_m}. Consider the last rule, r_i = P(r_i) : β^i_1, ..., β^i_{k_i} / φ, of each proof scheme p_i. Now, since φ appears in every extension of the system, for each extension S there is a proof scheme that derives φ in S; call it p_S. Since each rule of the proof scheme applies in S, there must be a minimal proof scheme p_i, for some i ∈ {1, ..., m}, contained in p_S, i.e. p_i ≼ p_S, that derives φ in S.

Define the refined common derivation d_1(φ) to be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i) ∪ ∨_i (P(r_i) ∧ [∨ R(r_i)]) / φ.

Further, define the strong refined common derivation d_2(φ) to be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i) ∪ ∧_i (P(r_i) ∧ [∨ R(r_i)]) / φ.
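Both rules of Definition 3.1.29 can be built symbolically from the terminal rules. In the sketch below the ('or', ...)/('and', ...) tuple encoding and the function name are assumptions of this illustration; the two derivations differ only in whether the guard formulas P(r_i) ∧ [∨ R(r_i)] are joined disjunctively (d_1) or conjunctively (d_2).

```python
from itertools import product

def refined_common_derivations(terminal_rules, phi):
    """Build d_1(phi) and d_2(phi) from terminal rules r_i = (P(r_i), R(r_i), phi).
    Both share the premise or_i P(r_i) and the restraint set and_i R(r_i); they
    differ in the extra restraint built from the guards P(r_i) ^ [or R(r_i)]."""
    premise = ("or",) + tuple(p for p, _, _ in terminal_rules)
    base = [("and",) + pick for pick in product(*(r for _, r, _ in terminal_rules))]
    # the guard P(r_i) ^ [or R(r_i)] for each terminal rule r_i
    guards = [("and", p, ("or",) + tuple(r)) for p, r, _ in terminal_rules]
    d1 = (premise, base + [("or",) + tuple(guards)], phi)   # one big disjunction
    d2 = (premise, base + [("and",) + tuple(guards)], phi)  # one big conjunction
    return d1, d2

# Terminal rules of Example 3.1.17: p1 : not-j1 / c and p2 : not-j2 / c.
rules = [("p1", ["nj1"], "c"), ("p2", ["nj2"], "c")]
d1_c, d2_c = refined_common_derivations(rules, "c")
print(d1_c[1][-1][0], d2_c[1][-1][0])  # or and
```

The disjunctive guard of d_1(φ) is harder to keep out of an extension, while the conjunctive guard of d_2(φ) is easier, which is precisely the trade-off the surrounding theorems explore.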

Taking each rule in its turn, we find different properties. Since φ belongs to each extension of the system, it follows as usual that <U, N ∪ {d_1(φ)}> and <U, N ∪ {d_2(φ)}> each retain the extensions of <U, N>. Considering the rule d_1(φ), we obtain the following results:

Theorem 3.1.30 Let U = <U, N> be a locally finite active nonmonotonic rule system such that φ appears in every extension of the system. If d_1(φ) is the refined common derivation for φ in U, then <U, N ∪ {d_1(φ)}> is locally finite and active.

Proof: Suppose U is a locally finite active nonmonotonic rule system such that φ appears in every extension of the system. Let d_1(φ) be the refined common derivation for φ in U. Consider the nonmonotonic rule system created by adding the refined common derivation to the set of nonmonotonic rules:

<U, N ∪ {d_1(φ)}>.

Given that the original system is active, every rule in N applies in some extension of U. Since the system <U, N ∪ {d_1(φ)}> will retain the extensions of the original system, and any extra extensions obtained are such that the common derivation applies, the new system is active. Further, the system U is locally finite, so the set of minimal proof schemes for any sentence ψ ∈ U is finite. These sets of minimal proof schemes will remain finite unless applying the common derivation allows the application of an infinite list of rules that conclude some ψ ∈ U. Suppose this is true. This infinite list of rules must be from N, as the common derivation is the only rule added to obtain the new system. Thus, these rules are active, so that each rule is applied in some extension of U. Thus, each rule provides a proof scheme for ψ in U. Since each rule is distinct and concludes ψ, none of these proof schemes is contained in any other, so that there are infinitely many minimal proof schemes for ψ and U is not locally finite, a contradiction. Thus, the new system <U, N ∪ {d_1(φ)}> is locally finite.

However, using this version of the common derivation, we may not have that d_1(φ) applies in every extension of the system. The reason is that, for d_1(φ) to apply, the disjunction in the restraints,

∨_i (P(r_i) ∧ [∨ R(r_i)]),

must not be found in any extension of the system. For this to be true, none of the

(P(r_i) ∧ [∨ R(r_i)])

may appear in any extension of the system. This is too much to ask, as we can only be certain that for each extension there is at least one rule r_i that applies. There may be one or more rules that do not apply; i.e., there may be a rule r_j that does not apply, with P(r_j) in the extension but also some restraint of r_j in the extension. In that case, P(r_j) ∧ [∨ R(r_j)] is in the extension, so that the disjunction

∨_i (P(r_i) ∧ [∨ R(r_i)])

is in the extension and the common derivation d_1(φ) does not apply.

Considering the strong refined common derivation d_2(φ), we get just the opposite problem. We find that this version of the common derivation applies in every extension of the system, but may create new extensions when added to the set of nonmonotonic rules.

Theorem 3.1.31 Let U = <U, N> be a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d_2(φ) be the strong refined common derivation of φ as defined in Definition 3.1.29. Then, d_2(φ) applies in every extension of the system U and

E(U) ⊆ E(<U, N ∪ {d_2(φ)}>).

Proof: Let U = <U, N> be a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d_2(φ) be the strong refined common derivation of φ as defined in Definition 3.1.29. Recall that

d_2(φ) = ∨_i P(r_i) : ∧_i R(r_i) ∪ ∧_i (P(r_i) ∧ [∨ R(r_i)]) / φ.

We need to show that the premise of the rule is in every extension of the system U and that no restraint is in any extension of the system. Let S be any extension of U. First, consider the premise, ∨_i P(r_i). Since S is an extension and thus φ is in S, there is a minimal proof scheme p_i ∈ Dr_φ that applies in S and concludes φ. r_i is the last rule of this proof scheme and thus it must apply in S, so that its premise P(r_i) is in S. Thus, ∨_i P(r_i) is in S. Now we consider the set of restraints,

∧_i R(r_i) ∪ ∧_i (P(r_i) ∧ [∨ R(r_i)]).

Since r_i applies in S, no restraint in the set R(r_i) is in S. Any α ∈ ∧_i R(r_i) has the form β^1_{j_1} ∧ ... ∧ β^m_{j_m}, where β^i_{j_i} ∈ R(r_i) for each i. Since β^i_{j_i} ∉ S, it follows that α ∉ S. Thus, no restraint in the set ∧_i R(r_i) is in S, and P(r_i) ∧ [∨ R(r_i)] is not in S. Thus, ∧_i (P(r_i) ∧ [∨ R(r_i)]) is not in S, so that no restraint of d_2(φ) is in S. Thus, d_2(φ) applies in every extension of the system U and therefore

E(U) ⊆ E(<U, N ∪ {d_2(φ)}>)

by Theorem 3.1.28.

Due to some of the same problems as before, we may still gain extra extensions when the common derivation is added to the set N of rules. We now must ask whether it is possible to ensure that the sets of extensions will be the same, and if so, what restrictions will be needed. We find that we can ensure that the sets will be equal, and there are several options for doing so.

Theorem 3.1.32 Let U = <U, N> be a locally finite premise-free nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d_2(φ) be the strong refined common derivation of φ as defined in Definition 3.1.29. Then,

E(U) = E(<U, N ∪ {d_2(φ)}>).

Proof: Suppose that U is a locally finite premise-free nonmonotonic rule system and let φ ∈ U be a sentence that appears in every extension of the system. Let d_2(φ) be the strong refined common derivation of φ as defined in Definition 3.1.29. Since the system is premise-free, d_2(φ) = : ∧_i (∨ R(r_i)) / φ. By Theorem 3.1.31 we know that the common derivation applies in every extension of U. Hence, by Theorem 3.1.31, we have that

E(U) ⊆ E(<U, N ∪ {d_2(φ)}>).

We have only to show the reverse inclusion. For any consequence ψ in some extension S in the new system, there must be an S-applicable proof scheme for ψ. If the last rule in the proof scheme is anything other than d_2(φ), then it is a rule of the original system and therefore the proof scheme is valid in the original system. Otherwise, d_2(φ) is the terminal rule of the proof scheme for ψ; hence ψ = φ, and this rule applies in S, so that ∧_i (∨ R(r_i)) is not in S. That is, for some i, (∨ R(r_i)) is not in S, so that the premise-free rule r_i applies in S. Hence, the proof scheme p_i applies in S. In either case, we see that any proof scheme in the new system is a proof scheme in the original system. Therefore S is an extension of U, and

E(U) = E(<U, N ∪ {d_2(φ)}>).

Note that in a premise-free system, all minimal proof schemes have just one line. However, the requirements on the rule system need not be this strong to ensure that the sets of extensions will be the same. There are other options.

We then obtain the following less restrictive results:

Theorem 3.1.33 Let U = <U, N> be a locally finite nonmonotonic rule system such that any sentence in the intersection of all the extensions of the system is premise-free. Suppose that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,

E(U) = E(U⁺).

Proof: Let U = <U, N> be a locally finite nonmonotonic rule system such that any sentence in the intersection of all the extensions of the system is premise-free. Suppose that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then, since the last rules of the proof schemes for φ are the only rules considered in the construction of the common derivation, we may follow the proof of the case of the system U itself being locally finite and premise-free.


We may instead place the requirements on the sentence in question, i.e. on
the sentence which appears in every extension of the system.

Theorem 3.1.34 Let U = <U, N> be a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system and is premise-free. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then, d^1_φ is premise-free and

E(U) = E(U⁺).

Proof: Suppose that U = <U, N> is a locally finite nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system and is premise-free. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Any element α of the restraints of d^1_φ has the form β^1_{j_1} ∧ ... ∧ β^m_{j_m}, where β^i_{j_i} ∈ R(r_i) for each i, and no such conjunction is in S, for any extension S in which the rule applies. We claim that for some i, none of the β^i_j is in S, so that the rule r_i applies. Supposing not, we may choose for each i some δ_i = β^i_{j_i} in S. However, we then have that δ_1 ∧ ... ∧ δ_m ∈ S, a contradiction. Thus, any proof scheme in the new system may be replaced with a proof scheme in the original system, and hence

E(U) = E(U⁺).

Next, we consider iterating the common derivation as in Chapter 2.

Considering the language to be infinite, and taking the common derivation to be either the standard, refined, or strong refined common derivation, we still have that I_1 ⊇ I_2 ⊇ I_3 ⊇ ⋯. We may define I* to be the intersection of all the I_n, define N* to be the union of all the N_n, and define E* to be the union of all the sets of extensions of the rule systems <U, N_n>. We then find the following:

Theorem 3.1.35 I* = ∩E*.

Proof: Suppose that φ ∈ I*. Then φ ∈ I_n for all n, so that

φ ∈ ∩E(<U, N_n>)

for all n. Letting S be any extension in E*, we have that S is an extension of <U, N_n> for some n, so that φ ∈ S and thus φ ∈ ∩E*. For the reverse containment, suppose that we have φ ∈ ∩E*. Then φ ∈ S for any extension S ∈ E*, so that φ ∈ S for any extension S of any <U, N_n>. Hence, φ ∈ I_n for any n, so that φ ∈ ∩I_n = I*. Thus the equality holds.

Theorem 3.1.36 E* ⊆ E(<U, N*>).

Proof: Let S ∈ E*, so that S is an extension of some <U, N_n> and an extension of any <U, N_m> for m > n. We want to show that S is an extension of <U, N*>. To do this, suppose that a rule r ∈ N* generates for S. Then r ∈ N_m for some m, and hence for some m ≥ n. Since S is an extension of <U, N_m>, the consequence of r is in S, so that S is an extension of <U, N*>.

One might ask at this point whether N* has common derivations or not. This is a point to be investigated.

3.2 Locally Finite Constructive Nonmonotonic Rule Systems

In this section, we consider the same question about nonmonotonic rule systems as in the previous section, but use a constructive approach. That is, we consider that p ∨ q is not derived unless one of p or q is derived, and extensions are not closed under the propositional rule p ∨ ¬p, as in the finite constructive case. Say that an extension S is constructive if p ∨ q ∈ S implies that p ∈ S or q ∈ S. Note that if a set S contains p ∨ ¬p and is constructive, then S is complete.


Again, consider φ, a formula of the language L such that φ ∈ ∩E(U). Then φ ∈ S for every extension S of U. As before, there is a finite list of minimal proof schemes Dr_φ = {p_1, ..., p_m} that derive φ in the extensions of U. For each i let r_i be the last (a.k.a. terminal) rule in p_i. The advantage of the constructive approach is the same as in the finite constructive case: it allows us to view each set of restraints as a single formula instead of as a set of formulas. Then, instead of asking whether each element of the set of restraints is not in a context, we ask whether the restraint itself is not in the context. As it stands, each rule r_i has as its restraints the set {β^i_1, ..., β^i_{k_i}} for some sentences β^i_j of the language. We may instead let R(r_i) = β^i_1 ∨ ... ∨ β^i_{k_i}. Then R(r_i) is in an extension S if and only if β^i_j ∈ S for some j. We find that several of the theorems of the last section still hold.
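The constructive reading above can be made concrete: the restraint set is replaced by one disjunction, and a single membership question about that formula settles what previously took one question per element. The ('or', ...) encoding and function names below are illustrative assumptions.

```python
def as_disjunction(restraints):
    """Replace the restraint set {b_1, ..., b_k} by the single formula
    b_1 or ... or b_k, encoded as an ('or', ...) tuple (an assumption)."""
    return ("or",) + tuple(restraints)

def in_constructive_extension(formula, S):
    """In a constructive extension, a disjunction is in S exactly when
    some disjunct is; atoms are checked by plain membership."""
    if isinstance(formula, tuple) and formula[0] == "or":
        return any(in_constructive_extension(f, S) for f in formula[1:])
    return formula in S

R_r = as_disjunction(["b1", "b2"])
print(in_constructive_extension(R_r, {"b2", "q"}))  # True: b2 is in S
```

The equivalence "R(r_i) ∈ S iff some β^i_j ∈ S" is precisely the constructivity condition on S, which is why this simplification is unavailable in the classical setting.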

Theorem 3.2.37 For any formula φ of the language L such that φ ∈ ∩E(U), φ has a common derivation d^1_φ which generates φ in each extension S of U, where U is a locally finite constructive nonmonotonic rule system.

Proof: Suppose U is a locally finite constructive nonmonotonic rule system. Suppose further that φ is a sentence of U that appears in every extension of the system. Since the system is locally finite, there are only finitely many minimal proof schemes for φ. List the proof schemes for φ as Dr_φ = {p_1, ..., p_m}. Consider the last rule, r_i = P(r_i) : β^i_1, ..., β^i_{k_i} / φ, of each proof scheme p_i. Now, since φ appears in every extension of the system, for each extension S there is a proof scheme that derives φ in S; call it p_S. Since each rule of the proof scheme applies in S, there must be a minimal proof scheme p_i, for some i ∈ {1, ..., m}, contained in p_S that derives φ in S. Let d^1_φ be the nonmonotonic rule

∨_i P(r_i) : ∧_i R(r_i) / φ.

Let S be any extension of U. We have left only to show that d^1_φ applies in S. Since S is an extension of U, there is some minimal proof scheme p_i ∈ Dr_φ that derives φ in S. Since this proof scheme derives φ in S, each rule in the proof scheme applies in S, so that r_i applies in S. Thus, P(r_i) ∈ S and β^i_1, ..., β^i_{k_i} ∉ S. Thus,

P(r_i) ∈ S, so ∨_i P(r_i) ∈ S,

and any δ_1 ∧ ... ∧ δ_m ∈ ∧_i R(r_i) will not be in S. Thus, d^1_φ applies in S and concludes c(d^1_φ) = φ. Thus, for any formula φ of the language such that φ ∈ ∩E(U), φ has a common derivation d^1_φ which generates φ in each extension S of U. Note that d^1_φ may be in N, but then U⁺ will have the same set of extensions as U.

As in the finite case, the classical and constructive locally finite versions of the standard common derivation d^1_φ produce very similar rules. The difference is that the constructive version is computationally simpler. We may also simplify the strong common derivation.

Theorem 3.2.38 For any formula φ of the language such that φ ∈ ∩E(U), define d*_φ to be ∨_i P(r_i) : ∨_i R(r_i) / φ. Then,

d*_φ ∈ GD(N ∪ {d*_φ}, ∩E(U)).

Proof: Let φ be a formula of the language such that φ ∈ ∩E(U). Consider d*_φ as defined above. Clearly d*_φ ∈ N ∪ {d*_φ}. Now, for every extension S in E(U), some minimal proof scheme p_i generates φ in S, so that P(r_i) ∈ S. Thus, P(d*_φ) = ∨_i P(r_i) ∈ ∩E(U). Lastly, we have that R(d*_φ) = ∨_i R(r_i) ∈ ∩E(U) if and only if R(r_i) ∈ ∩E(U) for some i. However, this is so if and only if R(r_i) is in every extension. Now let S_0 be an extension in which the rule r_i generates. Then R(r_i) ∉ S_0, a contradiction. So we have that R(d*_φ) is not in ∩E(U). Thus d*_φ generates φ in ∩E(U).
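The computational gain of d*_φ over the classical d^1_φ is easy to quantify: the classical restraint set ∧_i R(r_i) contains one conjunction per choice of a restraint from each R(r_i), while ∨_i R(r_i) is a single disjunction listing each restraint once. A small sketch (the counting functions are illustrative, not from the text):

```python
from math import prod

def classical_restraint_count(restraint_sets):
    """Size of the classical restraint set and_i R(r_i): one conjunction per
    choice of one restraint from each R(r_i)."""
    return prod(len(r) for r in restraint_sets)

def constructive_restraint_size(restraint_sets):
    """Size of the single constructive disjunction or_i R(r_i): every
    restraint listed once."""
    return sum(len(r) for r in restraint_sets)

# m terminal rules with k restraints each: k**m conjunctions versus one
# disjunction of m*k formulas -- the computational gain of the constructive view.
R = [[f"b{i}{j}" for j in range(3)] for i in range(4)]   # 4 rules, 3 restraints each
print(classical_restraint_count(R), constructive_restraint_size(R))  # 81 12
```

The exponential-versus-linear gap is what makes the constructive version "computationally simpler" in the sense asserted above.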

By the same argument as in Theorem 3.1.28, we still have that E(U) ⊆ E(U⁺) for any φ ∈ ∩E(U). Also as before, we may gain extra extensions, i.e., the containment may be strict.


Further, we might try considering active nonmonotonic rule systems, in hopes that this would cure the new system of its extra extensions. This does not help, just as it did not help in the finite case, and for the same reasons.

However, all is not lost. The mission to attain equal sets of extensions for U and U⁺ is salvaged by returning to the refined common derivation d_1(φ). The only required restrictions are that the system be locally finite and constructive.

Theorem 3.2.39 Let U = <U, N> be a locally finite constructive nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d_1(φ) be the refined common derivation of φ as defined in Definition 3.1.29. Then,

E(U) = E(<U, N ∪ {d_1(φ)}>).

Proof: Let U = <U, N> be a locally finite constructive nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d_1(φ) be the refined common derivation of φ as defined in Definition 3.1.29. Suppose that S ∈ E(U). Then S is the set of S-consequences in U of a subset I of U. To show that S ∈ E(<U, N ∪ {d_1(φ)}>), we need to show that S is the set of S-consequences in <U, N ∪ {d_1(φ)}> of some subset J of U. We let J = I. This holds as before, since φ is in S and the only new rule concludes φ. Thus, we have only to show the reverse inclusion. For any consequence ψ in some extension S in the new system, there must be an S-applicable proof scheme for ψ. If the last rule in the proof scheme is anything other than d_1(φ), then it is a rule of the original system and therefore the proof scheme is valid in the original system. Otherwise, d_1(φ) is the terminal rule of the proof scheme for ψ; hence ψ = φ and this rule applies in S, so that ∨_i (P(r_i) ∧ [∨ R(r_i)]) is not in S. Thus, no restraint of d_1(φ) is in S. Considering the premise of the rule, ∨_i P(r_i), and knowing that the premise is in S since this rule applies, we have that one of the P(r_i) is in S. This is due to the system being constructive. For that i, since P(r_i) ∧ [∨ R(r_i)] is not in S while P(r_i) is, ∨ R(r_i) is not in S, so that the rule r_i applies in S. Hence, the proof scheme p_i applies in S. In either case, we see that any proof scheme in the new system is replaceable by a proof scheme in the original system. Therefore S is an extension of U, and

E(U) = E(<U, N ∪ {d_1(φ)}>).

Now we begin to question the usefulness of the standard common derivation d^1_φ. This version still has its place. Under appropriate restrictions, the less complex (as compared to d_1(φ)) standard common derivation will yield the equal sets of extensions sought.

Theorem 3.2.40 Let U = <U, N> be a locally finite restraint-free constructive nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,

E(U) = E(U⁺).

Proof: Suppose that U is a locally finite restraint-free constructive nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the common derivation of φ as defined in Theorem 3.1.24. In this case, d^1_φ = ∨_i P(r_i) : /φ. By Theorem 3.1.24 we know that the common derivation applies in every extension of U. Hence, by Theorem 3.1.28, we have that

E(U) ⊆ E(U⁺).

We have only to show the reverse inclusion. For any consequence ψ in some extension S in the new system, there must be an S-applicable proof scheme for ψ. If the last rule in the proof scheme is anything other than d^1_φ, then it is a rule of the original system and therefore the proof scheme is valid in the original system. Otherwise, d^1_φ is the terminal rule of the proof scheme for ψ; hence ψ = φ and this rule applies in S. Considering the premise of the rule, ∨_i P(r_i), and knowing that the premise is in S since this rule applies, we have that one of the P(r_i) is in S. This is due to the system being constructive. There are no restraints to worry about. Thus, the rule r_i applies in S. Hence, the proof scheme p_i applies in S. In either case, we see that any proof scheme in the new system is replaceable by a proof scheme in the original system. Therefore S is an extension of U, and

E(U) = E(U⁺).

It is important to note here that being restraint-free automatically makes the rules monotonic, but this case is still interesting in the context of what we are trying to accomplish.

Theorem 3.2.41 Let U = <U, N> be a locally finite constructive normal nonmonotonic rule system such that a sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,

E(U) = E(U⁺).

Proof: Suppose that U is a locally finite constructive normal nonmonotonic rule system such that the sentence φ ∈ U appears in every extension of the system. Let d^1_φ be the common derivation of φ as defined in Theorem 3.1.24. Then, d^1_φ = ∨_i P(r_i) : ¬φ / φ. By Theorem 3.1.24 we know that the common derivation applies in every extension of U. Hence, by Theorem 3.1.28, we have that

E(U) ⊆ E(U⁺).

We have only to show the reverse inclusion. For any consequence ψ in some extension S in the new system, there must be an S-applicable proof scheme for ψ. If the last rule in the proof scheme is anything other than d^1_φ, then it is a rule of the original system and therefore the proof scheme is valid in the original system. Otherwise, d^1_φ is the terminal rule of the proof scheme for ψ; hence ψ = φ and this rule applies in S, so that ¬φ is not in S. Considering the premise of the rule, ∨_i P(r_i), and knowing that the premise is in S since this rule applies, we have that one of the P(r_i) is in S. This is due to the system being constructive. Thus, the rule r_i applies in S. Hence, the proof scheme p_i applies in S. In either case, we see that any proof scheme in the new system is replaceable by a proof scheme in the original system. Therefore S is an extension of U. Thus,

E(U) = E(U⁺).

However, the requirements on the rule system need not be this strong to
ensure that the sets of extensions will be the same. As in the classical case, there are
other options.

Theorem 3.2.42 Let U = <U, N> be a locally finite constructive nonmonotonic
rule system such that the intersection of all the extensions of the system is terminally
restraint-free. Suppose that a sentence φ ∈ U appears in every extension of the
system. Let d_φ be the standard common derivation of φ as defined in Theorem
3.1.24. Then,

E(U) = E(U+).

Proof: Suppose that U is a locally finite constructive nonmonotonic rule
system such that the intersection of all the extensions of the system is terminally
restraint-free. Suppose further that a sentence φ ∈ U appears in every extension of
the system. Let d_φ be the common derivation of φ as defined in Theorem 3.1.24.
Then, since the last rules of the proof schemes for φ are the only rules considered in
the construction of the common derivation, we may follow the proof of the case of
the system U itself being locally finite, constructive and restraint-free.

Theorem 3.2.43 Let U = <U, N> be a locally finite constructive nonmonotonic
rule system such that the intersection of all the extensions of the system is terminally
normal. Suppose that a sentence φ ∈ U appears in every extension of the system. Let
d_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,

E(U) = E(U+).

Proof: Let U = <U, N> be a locally finite constructive nonmonotonic rule
system such that the intersection of all the extensions of the system is terminally
normal. Suppose that a sentence φ ∈ U appears in every extension of the system.
Let d_φ be the standard common derivation of φ as defined in Theorem 3.1.24. Then,
since the last rules of the proof schemes for φ are the only rules considered in the
construction of the common derivation, we may follow the proof of the case of the
system U itself being locally finite, constructive, and normal.

Also as in the classical case, we may instead place the requirements on the
sentence in question, i.e., on the sentence φ which appears in every extension of the
system.

Theorem 3.2.44 Let U = <U, N> be a locally finite constructive nonmonotonic
rule system such that a sentence φ ∈ U appears in every extension of the system
and is terminally restraint-free. Let d_φ be the standard common derivation of φ as
defined in Theorem 3.1.24. Then, d_φ is restraint-free and

E(U) = E(U+).


3.3 Characterizing The Set of Extensions

In the infinite case, we must again ask if, given a family of sets, we can con-
struct a nonmonotonic rule system whose set of extensions is exactly that family. The
unfortunate answer is no. Although the set of extensions of any nonmonotonic
rule system is noninclusive (i.e., forms an antichain), not every noninclusive family
of sets is the set of extensions for a nonmonotonic rule system. Consider the following
example.

Example 3.3.18 Let E = {{u_i} : i = 0, 1, ...}, that is, the family of all singleton
sets. This is clearly noninclusive. Now suppose that E were the set of extensions of
some rule system U. For the extension {u_0}, there must be a proof scheme φ with
finite support S and conclusion u_0. Now just choose some u_k ∉ S. Then φ is also
applicable in {u_k}, which means that {u_k} is not deductively closed and is not an
extension.
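The failure in the example can be checked mechanically. Below is a minimal Python sketch (all names and the particular rule are hypothetical, for illustration only): a rule is modeled as (premises, restraints, conclusion), a rule applies in a set when every premise is in the set and no restraint is, and a deductively closed set must contain the conclusion of every rule that applies in it.

```python
# A rule is (premises, restraints, conclusion); all sets are sets of
# indices, so the set {u_k} is represented as {k}.

def applicable(rule, s):
    """A rule applies in s when every premise is in s and no restraint is."""
    prem, restr, _ = rule
    return prem <= s and not (restr & s)

def deductively_closed(rules, s):
    """s must contain the conclusion of every rule that applies in it."""
    return all(r[2] in s for r in rules if applicable(r, s))

# Hypothetical proof scheme for u_0 with finite support {u_5} (": u_5 / u_0"):
rule = (frozenset(), frozenset({5}), 0)
assert applicable(rule, {0})                # it witnesses u_0 in {u_0} ...
assert applicable(rule, {7})                # ... but also applies in {u_7},
assert not deductively_closed([rule], {7})  # so {u_7} is not deductively closed
```

Since the scheme's support is finite, almost every singleton {u_k} misses it, and each such singleton then fails closure, just as in the example.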

The remedy for this is explored in the next chapter, where we are able to
successfully characterize the set of extensions of a locally determined nonmonotonic
rule system.

CHAPTER 4
LOCALLY DETERMINED NONMONOTONIC RULE SYSTEMS

In this chapter we consider a special nonmonotonic subcase of the infinite case.
We let U be a locally determined nonmonotonic rule system. This has a large impact
on the results.

4.1 Locally Determined Nonmonotonic Rule Systems

We may obtain a similar result in the case of U having infinitely many exten-
sions, if we make some adjustments. To do this, we will first need some preliminaries.

Let U = <U, N> be a countable nonmonotonic rule system. For simplicity,
we will assume that U is a subset of the natural numbers ω = {0, 1, 2, ...}. Let
U = {u_0 < u_1 < ...}.

The informal notion of a locally determined rule system U is one in which the
existence of a proof scheme for a sentence u_i (or the lack of existence thereof) can be
determined by examining only rules or proof schemes involving some initial segment
of U.

Given a proof scheme or a rule p, we write max(p) for the maximum of the
set {j : u_j occurs in p}.

We shall write N_n for the set of all rules r ∈ N such that max(r) ≤ n and we
shall write U_n = {u_0, ..., u_n} and refer to the nonmonotonic rule system made up of
these two sets as <U_n, N_n>.

Definition 4.1.45 Let U = <U, N> be a computable nonmonotonic rule system.
Say that n is a LEVEL of U if for every subset S ⊆ {u_0, ..., u_n} and all i < n + 1, if
there exists a proof scheme p such that the conclusion c(p) = u_i and R(p) ∩ S = ∅,
then there exists a proof scheme q such that c(q) = u_i, R(q) ∩ S = ∅, and max(q) <
n + 1.
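For intuition, the level condition of Definition 4.1.45 can be tested by brute force on a toy finite system. The sketch below uses hypothetical helper names and assumes premise-free rules, each of which is its own one-step proof scheme, written as (restraints, conclusion); this is a simplification, not the general proof-scheme machinery.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def rule_max(rule):
    restr, concl = rule
    return max(restr | {concl})

def is_level(n, rules):
    """n is a level: for every S ⊆ {0..n} and every i ≤ n, if some scheme
    concludes u_i with support disjoint from S, then some such scheme
    also has maximum index at most n."""
    for S in map(set, subsets(range(n + 1))):
        for i in range(n + 1):
            witnesses = [r for r in rules if r[1] == i and not (r[0] & S)]
            if witnesses and all(rule_max(r) > n for r in witnesses):
                return False
    return True

rules = [(frozenset({1}), 0), (frozenset({2}), 1)]  # ": u_1 / u_0", ": u_2 / u_1"
assert not is_level(1, rules)  # u_1 is only provable via a scheme mentioning u_2
assert is_level(2, rules)
```

Here 1 fails to be a level because the only proof of u_1 mentions u_2, beyond the initial segment; 2 succeeds because every provable conclusion already has a witness of maximum index at most 2.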

Theorem 4.1.46 Suppose that n is a level of U and suppose that E is an extension
of U. Then, E_n = E ∩ {u_0, ..., u_n} is an extension of <U_n, N_n>.

Proof: Suppose that n is a level of U and suppose that E is an extension of
U. Let E_n = E ∩ {u_0, ..., u_n}. Since E is an extension of U, then for any u_i ∈ E_n, there
is a proof scheme p such that the conclusion of p is u_i and the support of p has empty
intersection with E. Thus, in particular, the support of p has empty intersection with
E_n, so that since n is a level, there exists a proof scheme p_0 such that max(p_0) ≤ n,
p_0 has conclusion u_i and the support of p_0 has empty intersection with E_n. Thus, p_0
is a proof scheme of <U_n, N_n>. In the other direction, if i ≤ n and u_i ∉ E_n, then
there can be no proof scheme p of <U_n, N_n> such that p has conclusion u_i and
the support of p has empty intersection with E_n, since this would violate the fact
that E is an extension of U. Thus, E_n is an extension of <U_n, N_n>.

Corollary 4.1.47 Suppose that n is a level of U and suppose that S ⊆ {u_0, ..., u_n}
is not an extension of <U_n, N_n>. Then, there is no extension E of U such that

E ∩ {u_0, ..., u_n} = S.

Further, say that U is LOCALLY DETERMINED or "has levels" if there
are infinitely many n such that n is a level of U. For a system U that is locally
determined, we let lev(U) be the set of all n such that n is a level of U and we write
lev(U) = {l_0 < l_1 < ...}.

Suppose that U is a recursive nonmonotonic rule system. Then we say that
the system HAS EFFECTIVE LEVELS if there is a recursive function / such that
for all i, f(i) > i and f(i) is a level of U.


Recall that a nonmonotonic rule system is said to be LOCALLY FINITE if for
each φ ∈ U, there are only finitely many ≺-minimal proof schemes with conclusion φ.
Recall also that U is HIGHLY RECURSIVE if U is recursive, locally finite, and there
is an effective procedure which, when applied to any φ ∈ U, produces a canonical
index of the set of all codes of ≺-minimal proof schemes with conclusion φ.

Now, a nonmonotonic rule system U that is locally determined is not neces-
sarily locally finite, but the set of extensions of U will nevertheless correspond to the
set of infinite paths through a finitely branching tree, and similarly the extensions of
a nonmonotonic rule system with effective levels will correspond to the set of infinite
paths through a highly recursive tree.

Theorem 4.1.48 Let U = <U, N> be a recursive nonmonotonic rule system.

1. If U has levels, then there is a recursive finitely branching tree T and a
one-to-one degree preserving correspondence between the set of extensions E(U) of U
and the set [T] of infinite paths through T. Also,

2. If U has effective levels, then there is a highly recursive finitely branching
tree T and a one-to-one degree preserving correspondence between the set of extensions
E(U) of U and the set [T] of infinite paths through T.

Proof: There is no loss of generality in assuming that U = ω, so that u_0 =
0, u_1 = 1, u_2 = 2, .... Next, observe that for each n, <U_n, N_n> has only finitely
many minimal proof schemes, so that we can effectively list all of the minimal proof
schemes p_0, p_1, ..., so that

1. If max(p_k) = i and max(p_l) = j and i < j, then k < l. (This says that if
i < j, then the proof schemes whose maximum is i come before those proof schemes
whose maximum is j.)


2. If max(p_k) = max(p_l) = i, then k < l if and only if code(p_k) < code(p_l), where
code(p) denotes the index assigned to a proof scheme p under some effective Gödel
numbering of the proof schemes.

We shall encode an extension S of the system U by a path π_S = (π_0, π_1, ...)
through the complete ω-branching tree ω^<ω as follows.

First, for all i ≥ 0, π_{2i} is 1 if i ∈ S and 0 otherwise. That is, at the stage 2i
we encode the information whether i belongs to the extension S. Next, if π_{2i} = 0, then
π_{2i+1} = 0. But if π_{2i} = 1, that is, if i ∈ S, then we put π_{2i+1} equal to that q_S(i)
such that p_{q_S(i)} is the first minimal proof scheme in our effective list of minimal
proof schemes such that the conclusion of p_{q_S(i)} is i and the support of p_{q_S(i)} has
empty intersection with S.

Clearly S is Turing reducible to π_S, for it is enough to look at the values of
π_S at even places to read off S. Now, given an S-oracle, it should be clear that for
each i ∈ S, we can use the S-oracle to find q_S(i) effectively. This means that π_S is
Turing reducible to S. Thus, the correspondence between S and π_S is an effective
degree-preserving correspondence. It is trivially one-to-one.
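The encoding just described can be sketched concretely for premise-free one-step proof schemes (a hypothetical simplification: each scheme is a pair (support, conclusion), listed in the effective order p_0, p_1, ...).

```python
def encode_path(S, schemes, length):
    """π_{2i} is 1 or 0 according to whether i ∈ S; when i ∈ S, π_{2i+1}
    is the index q_S(i) of the first listed scheme concluding i whose
    support has empty intersection with S."""
    pi = []
    for i in range(length):
        if i in S:
            q = next(j for j, (supp, concl) in enumerate(schemes)
                     if concl == i and not (supp & S))
            pi += [1, q]
        else:
            pi += [0, 0]
    return pi

schemes = [(frozenset({1}), 0), (frozenset({0}), 1)]
pi = encode_path({0}, schemes, 2)
assert pi == [1, 0, 0, 0]
# S is read off the even places, witnessing S ≤_T π_S:
assert {i for i in range(2) if pi[2 * i] == 1} == {0}
```

Reading S off the even places, and recovering each q_S(i) with an S-oracle, mirrors the two Turing reductions in the proof.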

The construction of a recursive tree T ⊆ ω^<ω such that the set [T] of infinite
paths through T equals the set {π_S : S is an extension of <U, N>} is given in
[MNR92b]. The key fact needed to establish the branching properties of the tree T
is that for any sequence σ ∈ T and any i, either σ(2i) = σ(2i + 1) = 0 or σ(2i) = 1
and σ(2i + 1) codes a minimal proof scheme for i. We just note that when a proof
scheme p = σ(2i + 1) does not correspond to a path π_S, then there will be some k
such that σ has no extension in T of length k. This will happen once we either find
a smaller code for a proof scheme or we find some u > i in the support of the proof
scheme p such that all possible extensions τ of σ have τ(2u) = 1.

Let L_k = max({i : max(p_i) ≤ k}). It is easy to see that since the system U is
a recursive nonmonotonic rule system, we can effectively calculate L_k from k.


We claim that the tree T is always finitely branching and that if the system
U has effective levels, then the tree T is highly recursive.

Clearly the only case of interest is when 2i + 1 < k and σ(2i) = 1. In this
case we will let σ(2i + 1) = c where the conclusion of the proof scheme p_c is i, the
support of p_c has empty intersection with the set I_σ coded by σ, and there is no
b < c such that the conclusion of the proof scheme p_b is i and the support of p_b has
empty intersection with I_σ. Now suppose that q is a level and that i ≤ q. Then, by
definition, there must be a minimal proof scheme p such that max(p) ≤ q, the
conclusion of p is i and the support of p has empty intersection with I_σ. Thus,
p = p_l for some l ≤ L_q. It follows that c ≤ L_q, where q is the least level greater
than or equal to i. Thus, the tree T is always finitely branching. Now, if the system
U has effective levels, as witnessed by the recursive function f, then it will always
be the case that c ≤ L_{f(i)}, so that the tree T will be highly recursive.

Corollary 4.1.49 Suppose that the system U is a nonmonotonic rule system with
levels such that there are infinitely many n such that <U_n, N_n> has an extension
S_n. Then U has an extension.

Proof: Now consider the tree T constructed for the system U as in Theorem
4.1.48. Here we can again construct our sequence of minimal proof schemes p_0, p_1, ...
recursive in U just as we did in Theorem 4.1.48. We can only conclude that the tree
T is recursive in the system U, but in any case, T will be a finitely branching tree.

Now fix some level n and consider some m > n such that <U_m, N_m>
has an extension S_m. Then by the exact same argument as in Theorem 4.1.46,
S_n = S_m ∩ {0, ..., n} will be an extension of <U_n, N_n>. Now consider the node
σ = (σ(0), ..., σ(2n + 1)) such that

1. σ(2i) = 0 if i ∉ S_n,

2. σ(2i) = 1 if i ∈ S_n,

3. σ(2i + 1) = 0 if σ(2i) = 0,

4. σ(2i + 1) = c where c is the least number such that max(p_c) ≤ n, the
conclusion of p_c is i and the support of p_c has empty intersection with S_n.

It is easy to see from our construction of the tree T that σ ∈ T. It follows that
T is infinite and hence T has an infinite path π by König's Lemma. But the proof of
Theorem 4.1.48 shows that S_π is an extension of the system U.

The next question is how to ensure that we can get recursive extensions. In
particular, suppose that the set of extensions of U is a decidable Π⁰₁ class, that is, the
set of infinite paths through a computable tree T with no dead ends. Then every node
of the tree has an infinite recursive extension. If we only assume that the set of dead
ends of T is computable, then any node with an infinite extension has a computable
infinite extension.

Definition 4.1.50 1. Let U = <U, N> be a nonmonotonic rule system with levels.
Suppose that {n : n is a level of <U, N>} = {l_0 < l_1 < ...}. Then we say that
U HAS THE LEVEL EXTENSION PROPERTY if for all k, if S_k is an extension
of <U_{l_k}, N_{l_k}>, then there is an extension S_{k+1} of <U_{l_{k+1}}, N_{l_{k+1}}> such that

S_{k+1} ∩ {u_0, ..., u_{l_k}} = S_k.

2. A level n of U is a STRONG LEVEL of U if for any level m < n of U and
any extension S_m of <U_m, N_m>, if there is an extension S_n of <U_n, N_n> with
S_n ∩ {u_0, ..., u_m} = S_m, then there is an extension S of U with S ∩ {u_0, ..., u_m} = S_m.

3. U HAS STRONG LEVELS if there is a computable function f such that,
for each i, i < f(i) and f(i) is a strong level.

The level extension property provides a way to construct an extension of the
system U by extending from level to level. The following result is immediate:


Theorem 4.1.51 If E is a decidable Π⁰₁ class of subsets of U with levels, then there
exists a nonmonotonic rule system U with the level extension property such that E is
the set of extensions of U.

Definition 4.1.52 We say that a recursive nonmonotonic rule system U with levels
has IMMEDIATE WITNESSES if for any levels m < n of U, whenever there is a
set V ⊆ {u_0, ..., u_m} such that there is an extension S of U with S ∩ {u_0, ..., u_m} = V
but there is no extension S of U such that S ∩ {u_0, ..., u_{m+1}} = V, then either

(i) there is a proof scheme p with max(p) ≤ n + 1 such that the support of p is a
subset of {u_0, ..., u_n} − V and p concludes u_{n+1}, or

(ii) there is a proof scheme p with max(p) ≤ n + 1 such that the support of p
is a subset of {u_0, ..., u_n, u_{n+1}} − V and the conclusion of p is in {u_0, ..., u_n, u_{n+1}} − V
(note: it is easy to see that it must be the case that u_{n+1} is in the support of p).

One can extend this concept as follows:

We say that a recursive nonmonotonic rule system U HAS WITNESSES OF
DELAY k if for all n, whenever there is a set V ⊆ {u_0, ..., u_n} such that there is an
extension S of U with S ∩ {u_0, ..., u_n} = V but there is no extension S of U such
that S ∩ {u_0, ..., u_{n+1}} = V, then either

(i) there is a proof scheme p with max(p) ≤ n + 1 such that the support of p
is a subset of {u_0, ..., u_n} − V and p concludes u_{n+1}, or

(ii) for all sets T ⊆ {u_{n+2}, ..., u_{n+k}}, there is a proof scheme p_T with max(p_T) ≤
n + k such that the support of p_T is a subset of {u_0, ..., u_{n+k}} − (T ∪ V) and the
conclusion of p_T is in {u_0, ..., u_{n+k}} − (T ∪ V).

Theorem 4.1.53 Suppose that U is a recursive nonmonotonic rule system which has
witnesses of delay k for some k ≥ 1 and which has at least one extension. Then the
lexicographically least extension of the system is recursive.


Proof: We can construct the lexicographically least extension S of the system
U by induction as follows.

Suppose that for any given n we have constructed S_n = S ∩ {u_0, ..., u_n}. Then
S_{n+1} = S_n unless either

(i) there is a proof scheme p of level n such that the support of p is a subset
of {u_0, ..., u_n} − S_n and p concludes u_{n+1}, or

(ii) for all sets T ⊆ {u_{n+2}, ..., u_{n+k}}, there is a proof scheme p_T of level n + k
such that the support of p_T is a subset of {u_0, ..., u_{n+k}} − (T ∪ S_n) and the conclusion
of p_T is in {u_0, ..., u_{n+k}} − (T ∪ S_n),

in which case S_{n+1} = S_n ∪ {u_{n+1}}. Note that since there are only finitely
many minimal proof schemes of any given level, we can check conditions (i) and (ii)
effectively. Since there is an extension, it is easy to see that our definitions ensure
that S_n is always contained in the lexicographically least extension of the system U.
Thus, S = ∪_n S_n is recursive.
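For a finite system the lexicographically least extension can also be found by brute force, a useful sanity check on the stage-by-stage construction in the proof. The sketch below uses hypothetical names and premise-free rules (support, conclusion); "least" prefers leaving each u_i out, matching the rule S_{n+1} = S_n unless a witness forces u_{n+1} in.

```python
from itertools import combinations

def applies(supp, s):
    return not (supp & s)

def is_extension(rules, s):
    """Deductively closed and grounded: s contains every forced conclusion
    and every member of s has an applicable rule concluding it."""
    closed = all(c in s for supp, c in rules if applies(supp, s))
    grounded = all(any(c == u and applies(supp, s) for supp, c in rules)
                   for u in s)
    return closed and grounded

def lex_least_extension(rules, universe):
    """Brute force over all subsets of a finite universe; compare
    characteristic sequences with 'absent' before 'present'."""
    exts = [set(comb) for r in range(len(universe) + 1)
            for comb in combinations(universe, r)
            if is_extension(rules, set(comb))]
    return min(exts, key=lambda s: [1 if u in s else 0 for u in universe],
               default=None)

rules = [(frozenset({1}), 0), (frozenset({0}), 1)]  # extensions: {0} and {1}
assert lex_least_extension(rules, [0, 1]) == {1}
```

The two rules give extensions {u_0} and {u_1}; the least one omits u_0, so it is {u_1}, exactly what the greedy construction would produce.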

By putting suitable effective bounds on the effective levels and/or the effective
witnesses, one can readily come up with conditions that force U to have exponential
time, NP, or P-time extensions. This is a topic of current research.

4.2 Common Derivations in Locally Determined Nonmonotonic Rule Systems

Let U = <U, N> be a nonmonotonic rule system with infinitely many ex-
tensions that is locally finite, locally noninclusive and locally determined (or "has
levels"). Suppose φ ∈ U appears in every extension of U. Since the system is com-
putable, we have that φ = u_j for some j. Since the system is locally determined,
there are infinitely many n such that n is a level of U, so that we may choose n(φ) to
be the least level of U greater than or equal to j. When we restrict each extension
of the system to the set {u_0, ..., u_{n(φ)}}, we then have only finitely many extensions,
call them S_1, ..., S_m. Then we may define a common derivation d_φ for φ in U to be
d_φ = : {θ_1 ∧ ... ∧ θ_m : ∀i, θ_i ∈ {u_0, ..., u_{n(φ)}} and θ_i ∉ S_i} / φ.
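The restraint set of this common derivation can be enumerated directly for a toy instance. The sketch below (hypothetical names) forms one conjunct per restricted extension, drawing θ_i from the complement of S_i, so each tuple stands for one restraint conjunction θ_1 ∧ ... ∧ θ_m.

```python
from itertools import product

def common_derivation_restraints(restricted_exts, universe):
    """All tuples (θ_1, ..., θ_m) with θ_i ∈ universe − S_i; each tuple
    encodes a conjunction used as a restraint of d_φ."""
    choices = [sorted(set(universe) - S) for S in restricted_exts]
    return list(product(*choices))

restraints = common_derivation_restraints([{0}, {1}], {0, 1, 2})
assert len(restraints) == 4
# No restraint can hold in any restricted extension S_j: its j-th
# conjunct was chosen outside S_j, so the conjunction fails in S_j.
assert all(t[j] not in S for t in restraints
           for j, S in enumerate([{0}, {1}]))
```

The final assertion is the key property used in the next theorem: every restraint of d_φ fails in every restricted extension, so d_φ applies in all of them.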
We then get a result similar to the finite case.

Theorem 4.2.54 Let U = <U, N> be a locally finite, active nonmonotonic rule
system that is locally noninclusive and locally determined. Suppose a sentence φ ∈ U
appears in every extension of the system. Let d_φ be the common derivation of φ as
defined above. Then, d_φ applies in every extension of the system U and

E(U) = E(<U, N ∪ {d_φ}>).

Proof: Let U = <U, N> be an active nonmonotonic rule system with in-
finitely many extensions. Suppose a sentence φ ∈ U appears in every extension of
the system. Let d_φ be the common derivation of φ. Consider an arbitrary restricted
extension S_j of U. Take any conjunction θ_1 ∧ ... ∧ θ_m such that θ_i ∉ S_i. Then θ_j ∉ S_j,
so that the conjunction θ_1 ∧ ... ∧ θ_m is not in S_j and thus none of the restraints of d_φ is
in S_j. Thus d_φ applies in S_j and therefore it applies in every extension of the system
U. By Theorem 3.1.28 we now have that

E(U) ⊆ E(<U, N ∪ {d_φ}>).

To show that we retain the same set of extensions, we need only to show the reverse
inclusion. Suppose that S_0 is a restricted extension of <U, N ∪ {d_φ}> that does not
extend to an extension of U. Then, since U is locally noninclusive, there exist ψ_1, ...,
ψ_m ∈ S_0 such that ψ_i ∉ S_i. Then, ψ_1 ∧ ... ∧ ψ_m is a restraint of d_φ. Thus, ψ_1 ∧ ... ∧ ψ_m
is not in S_0 since d_φ applies in S_0, a contradiction. Thus,

E(U) = E(<U, N ∪ {d_φ}>).


The other notions of the common derivation, the standard common derivation,
the refined common derivation and the strong refined common derivation developed
in chapter three still apply in locally determined nonmonotonic rule systems by the
same theorems.

4.3 Characterizing the Set of Extensions

In this section, we shall provide a characterization of the set of extensions of a
recursive locally determined nonmonotonic rule system U = (U, N). Let E(U) be the
set of all extensions of U. Our first observation is that E(U) is closed in the natural
topology on subsets of U.

Proposition 4.3.55 If U is locally determined, then E(U) is a closed set.

Proof: Let E_1, E_2, ... be a sequence of extensions with limit E in the usual
product topology on sets. Suppose that u_i ∈ E. Then there must be some K such
that u_i ∈ E_k for all k ≥ K. Thus for each k ≥ K, there is a proof scheme θ_k for u_i such
that E_k ∩ supp(θ_k) = ∅. Now let l be the least level ≥ i. Then since U is locally determined,
it follows that for each k ≥ K, there is a proof scheme φ_k of u_i with max(φ_k) ≤ l such
that E_k ∩ supp(φ_k) = ∅. But there are only finitely many possible support sets for such
proof schemes in U_l, so that infinitely many of the φ_k have the same support S. However,
since S ∩ E_k = ∅ for infinitely many k, it must be the case that S ∩ E = ∅, and hence u_i
has an E-applicable proof scheme. Vice versa,
suppose that u_i ∉ E. Then there must be some K such that for all k ≥ K, u_i ∉ E_k.
Suppose by way of contradiction that there is a proof scheme φ with cln(φ) = u_i and
supp(φ) = S with S ∩ E = ∅. Since S is finite, there must be some M such that
S ∩ E_m = ∅ for all m ≥ M. But this would mean that φ would witness that u_i ∈ E_m
for all m ≥ M, contradicting our previous assumption. (This direction applies even
if U is not locally determined.) We should note however that, in general, E(U) is not
a closed set.


Example 4.3.19 Let U = {u_0, u_1, ...} and let N consist of the following set of rules:
{ : u_{2^k(2n+1)} / u_k : n, k = 0, 1, ...}. This means that for any extension E and any k, u_k ∈ E if and
only if at least one of the set {u_{2^k(2n+1)} : n = 0, 1, ...} is not in E. It is not hard to see
that for any k, there will be an extension E of U which contains all of {u_0, u_1, ..., u_k}.
Thus if E(U) were closed, then U itself would be an extension, which is clearly false,
since none of the rules are U-applicable. Hence E(U) is not a closed set.

We say that a family E of sets is noninclusive if for any two distinct sets A, B ∈ E,
neither A ⊆ B nor B ⊆ A. A second key property of the set of extensions of a
nonmonotonic rule system is that it must be noninclusive. That is, the following
holds.

Lemma 4.3.56 For any nonmonotonic rule system U, the set E(U) is noninclusive.

However, not every noninclusive family of sets can be the set of extensions of
a nonmonotonic rule system. Recall the example given previously.

Example 4.3.20 Let E = {{u_i} : i = 0, 1, ...}, that is, the family of all singleton
sets. This is clearly noninclusive. Now suppose that E were the set of extensions
of some rule system U. For the extension {u_0}, there must be a proof scheme φ
with finite support S and conclusion u_0. Now just choose some u_k ∉ S. Then φ is
also applicable in {u_k}, which means that {u_k} is not deductively closed and is not
an extension. Thus E = {{u_i} : i = 0, 1, ...} cannot be the set of extensions of a
nonmonotonic rule system.

However, by combining the two ideas of closure and noninclusivity, we can
define a condition which guarantees that a family of sets is the set of extensions of a
nonmonotonic rule system U = (U, N) with strong levels. Given a family S of subsets
of U, let S_n = {E ∩ {u_0, ..., u_n} : E ∈ S}.

Definition 4.3.57 Let S be a family of subsets of U.

1. We say that n is a level of S if S_n is mutually noninclusive.

2. We say that S has levels if there are infinitely many n such that n is a level of
S.

3. We say that S has effective levels if there is a recursive function f such that,
for all i, f(i) > i and f(i) is a level of S.

There are many examples of families of sets of U = {u_0, u_1, ...} with effective
levels. For example, consider the family S of all sets S such that for all n, |S ∩
{u_0, ..., u_{2^n}}| = n. It is easy to see that for all n, 2^n is a level of S. For a more
general example, let U be the set of all finite truth table functions on a countably
infinite set {a_0, a_1, ...} of propositional variables. That is, for each sentence ψ of
propositional calculus, U contains exactly one sentence logically equivalent to ψ.
These are listed in order of the maximum variable a_k on which the sentence depends.
Thus, u_0 and u_1 are the (constant) True and False sentences; u_2 and u_3 are the
sentences a_0 and ¬a_0; u_4, ..., u_15 list the sentences depending on a_0 and a_1, and so
on. Now let Γ be any consistent set of sentences and let S(Γ) be the set of complete
consistent extensions of Γ. The levels of S = {U ∩ S : S ∈ S(Γ)} are just the numbers
2^(2^k) − 1. This is because if two sets in S disagree on the first 2^(2^k) − 1 sentences, then
there must be some i with i < k such that they disagree on a_i, which means that
one of the sets contains a_i but not ¬a_i whereas the other set contains ¬a_i but not a_i.
Thus the two sets are mutually noninclusive.
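The count behind this example is easy to verify: there are 2^(2^k) truth-table functions of the variables a_0, ..., a_{k-1}, so in the listing just described the level indices are the numbers 2^(2^k) − 1. A small arithmetic sketch (the function name is ours):

```python
def truth_table_level_indices(kmax):
    """Index of the last sentence depending only on a_0..a_{k-1}: there
    are 2**(2**k) truth-table functions on k variables, listed from 0."""
    return [2 ** (2 ** k) - 1 for k in range(kmax)]

# u_0, u_1 are the constants (k = 0); u_2, u_3 add a_0; u_4..u_15 add a_1; ...
assert truth_table_level_indices(4) == [1, 3, 15, 255]
```

The values 1, 3, 15, 255 match the listing in the text: constants end at u_1, the a_0-sentences at u_3, the sentences in a_0 and a_1 at u_15, and so on.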

Theorem 4.3.58 If S is a closed family of subsets of U with levels, then there exists
a nonmonotonic rule system U with strong levels such that S is the set of extensions
of U. Furthermore, if S is a decidable Π⁰₁ class and has effective levels, then U may
be taken to have effectively strong levels.


Proof: Given the family S, we shall directly construct a nonmonotonic rule
system U = (U, N) such that E(U) = S. First, if S is empty, we let N consist of the
single rule : u_0 / u_0. It is easy to see in this case that U has no extensions.

Thus we assume that S ≠ ∅ and hence that each S_n is nonempty as well. We
then create a set of rules for every level n of S. For each level n, let E_1^n, ..., E_{k_n}^n be the
list of all sets of the form E ∩ {u_0, ..., u_n} for E ∈ S. Then for each such E_i^n and each
c ∈ E_i^n, N will contain a rule r_{i,c} = : β_1, ..., β_m / c, where {β_1, ..., β_m} = {u_0, ..., u_n} − E_i^n.
It is then easy to see that each E_i^n is an extension of U_n and that n is a level of U_n.
Moreover, it easily follows that the set of extensions of U is exactly S.

For the second part of the theorem, we use the same nonmonotonic rule sys-
tem. We note that a decidable Π⁰₁ class of sets has the property that there is a highly
recursive tree T contained in {0, 1}* such that the set of infinite paths through T corre-
sponds to the characteristic functions of elements of S and T has no dead ends, i.e.,
every node η ∈ T can be extended to an infinite path through T. Thus for any level
n of S, the sets E_1^n, ..., E_{k_n}^n described above will just correspond to the set of nodes of
length n in the tree T. Because each of the nodes of length n can be extended to an
infinite path through T, it follows that each extension E_i^n of U_n can be extended to
an extension E of U such that E ∩ {u_0, ..., u_n} = E_i^n. It then easily follows that n
is a strong level of U. Thus U will have effectively strong levels in this case.
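The rule construction in the proof of Theorem 4.3.58 can be replayed on a small finite family. The sketch below (hypothetical names; premise-free rules written as (restraints, conclusion)) builds r_{i,c} for each restricted set and checks that each restricted set comes out as an extension of the finite system.

```python
def rules_from_family(restrictions, n):
    """For each E_i^n and each c in E_i^n, a rule with conclusion c and
    restraints {u_0, ..., u_n} − E_i^n, as in the proof."""
    rules = []
    for E in restrictions:
        restraints = frozenset(range(n + 1)) - E
        rules += [(restraints, c) for c in E]
    return rules

def is_extension(rules, s):
    """Deductively closed and grounded for premise-free rules."""
    applies = lambda restr: not (restr & s)
    closed = all(c in s for restr, c in rules if applies(restr))
    grounded = all(any(c == u and applies(restr) for restr, c in rules)
                   for u in s)
    return closed and grounded

family = [frozenset({0, 1}), frozenset({2})]   # noninclusive restrictions
rules = rules_from_family(family, 2)
assert all(is_extension(rules, set(E)) for E in family)
```

Each restriction admits exactly its own rules (every rule for a different restriction carries a restraint inside the set, by noninclusivity), which is the mechanism the proof relies on.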

One problem for nonmonotonic rule systems is to determine which sets of
extensions can possibly result from some (recursive) nonmonotonic rule system. It is
well known that any set of extensions must be mutually noninclusive, that is, if S
and S_0 are two different extensions of the system U, then we can never have S ⊆ S_0
or S_0 ⊆ S. We will sometimes say that the set of extensions has the noninclusive
property. On the other hand, for infinite languages, not every mutually noninclusive
set of extensions can be realized.


Nonmonotonic rule systems with levels may be used to produce a large family
of possible sets of extensions. Given a family F of subsets of U, let F_n = {S ∩
{u_0, ..., u_n} : S ∈ F}.

Definition 4.3.59 Let F be a family of subsets of U.

1. We say that n is a level of F if F_n is mutually noninclusive.

2. We say that F has levels if there are infinitely many n such that n is a level
of F.

3. We say that F has effective levels if there is a recursive function f such
that for all i, i < f(i) and f(i) is a level of F.

Theorem 4.3.60 1. If F is a closed family of subsets of U with levels, then there
exists a nonmonotonic rule system U with levels such that F is the set of extensions
of U.

2. If F is a decidable Π⁰₁ family of subsets of U with effective levels, then
there exists a nonmonotonic rule system U with effective levels such that F is the set
of extensions of U.

Proof: Observe that if F is empty, then the system with the single rule : u_0 / u_0
has no extensions. Thus, we may assume that F is nonempty and thus each F_n is
nonempty as well.

For each level n, we have a set of rules as follows. For each level n, let S_1, ..., S_k
list the family of intersections S ∩ {u_0, ..., u_n} for S ∈ F. For each partial extension
S_i and each c ∈ S_i, we will have a rule r_{i,c} with conclusion c and with restraint b for
each b ∉ S_i. We know that the rule r_{i,c} must exist since S_j is not a subset of S_i by
the noninclusive property.


We claim that each level n of the family F is also a level of the proof system
U. To see this, let T be a subset of {u_0, ..., u_n}, let j ≤ n and let p be a proof scheme
with conclusion u_j = c such that the support of p has empty intersection with T. Since
the rules of U have no premises, we may assume that p consists of a single rule r = r_{i,c}.
Thus, there exists S ∈ F and a level m of F such that S_i = S ∩ {u_0, ..., u_m}. If m ≤ n,
then max(p) ≤ m ≤ n, as desired. On the other hand, if m > n, then consider the
restricted partial extension S_k = S ∩ {u_0, ..., u_n} and the corresponding rule r_{k,c} at
level n. It is clear that S_k ⊆ S_i and that S_k ∩ {u_0, ..., u_n} = S_i ∩ {u_0, ..., u_n}. Then the
support of the rule r_{k,c} has empty intersection with T, since max(r_{k,c}) ≤ n and the
support of r_{k,c} is equal to {u_0, ..., u_n} − S_k = {u_0, ..., u_n} − S_i, which is a subset of the
support of r_{i,c}.

We need to show that the set of extensions of U is exactly F. Let us first show
that any element S of F is an extension of the system U. Given c ∈ S, the rule r_{i,c}
has no premises, has restraints all not in S, and has conclusion c. Thus, c has a one
line proof scheme. Next suppose that S admits some rule r = r_{k,d}. If k = i, then of
course d ∈ S, so that S is closed under r. If k ≠ i, then by the noninclusive property,
r has a restraint b ∈ S_i − S_k, so that S does not admit r.

Next we show that U has no other extensions. Let S* be any extension of U
and suppose that it differs from each S ∈ F. Since F is closed, there must be a level
n such that S*_n = S* ∩ {u_0, ..., u_n} differs from each S ∩ {u_0, ..., u_n}. Otherwise, there
would exist, for each level n, some E_n such that S* ∩ {u_0, ..., u_n} = E_n ∩ {u_0, ..., u_n}.
But then S* = lim_n E_n would be in F, since F is assumed to be closed.

Now there is at least one S ∈ F_n and S* ≠ S by the choice of n. Furthermore,
S* ∩ {u_0, ..., u_n} is an extension of <U_n, N_n>, by Theorem 4.1.48, since <U_n, N_n>
has levels. It follows that S* ∩ {u_0, ..., u_n} is not a subset of S, by the noninclusive
property, so that S* ∩ {u_0, ..., u_n} is nonempty. Thus there is some c ∈ S* ∩ {u_0, ..., u_n}
and therefore S* admits some rule r_{i,c} where S_i is an extension of F_n. But then it
follows that S* ∩ {u_0, ..., u_n} ⊆ S_i, again violating the noninclusive property.

Suppose now that F is a decidable Π⁰₁ class and has effective levels. Then
for any n, we can compute the set of intersections S ∩ U_n for S ∈ F, which will just
be the set of paths of length n in the tree T. Now we can effectively compute an
increasing sequence of levels and use the set of intersections S ∩ U_n to compute the
rules at level n for the desired rule system.

It follows from the proof of Theorem 4.1.46 that if U has effective levels and has
the level extension property, then the family of extensions of U may be represented
as a decidable Π⁰₁ class. Thus, we have the following results:

Theorem 4.3.61 1. Suppose that U is a recursive nonmonotonic rule system with
effective levels and the level extension property. Then for every level n and
extension S_n of <U_n, N_n>, there is a recursive extension S of U such that
S ∩ {u_0, ..., u_n} = S_n.

2. Suppose that U is a recursive nonmonotonic rule system with strong levels.
Then for every level n and extension S_n of <U_n, N_n>, if there is an extension S of
U with S ∩ {u_0, ..., u_n} = S_n, then there is a recursive extension S of U such that
S ∩ {u_0, ..., u_n} = S_n.

CHAPTER 5
COMPUTABILITY AND COMPLEXITY ISSUES

In this chapter we show a few results concerning the computability and com-
plexity of the common derivations and their associated nonmonotonic rule systems.

5.1 Finding Extensions with Low Complexity

Throughout this section, we shall assume that U = (U, N) is a recursive nonmonotonic rule system such that U = ω = {0, 1, 2, ...}. Moreover, if U is locally determined, we let {l_0 < l_1 < ...} denote the set of levels of U, and if U has strong levels, then we let {s_0 < s_1 < ...} denote the set of strong levels of U.

In this section, we shall show how we can use the notions of levels and strong levels to provide conditions which will ensure that U has an extension of relatively low complexity even when U is infinite. We shall distinguish two representations of U, namely the tally representation of U, Tal(U), and the binary representation of U, Bin(U). In the tally representation of U, we shall identify each n ∈ U with its tally representation, tal(n), and in the binary representation of U, we shall identify each natural number n with its binary representation, bin(n). Given a rule r = α_1, ..., α_n : β_1, ..., β_m / φ, we let the tally and binary representations of r be given by

tal(r) = 2 tal(α_1) 2 ··· 2 tal(α_n) 3 tal(β_1) 2 ··· 2 tal(β_m) 3 tal(φ) (5.1)

bin(r) = 2 bin(α_1) 2 ··· 2 bin(α_n) 3 bin(β_1) 2 ··· 2 bin(β_m) 3 bin(φ). (5.2)

We then let Tal(N) = {tal(r) : r ∈ N} and Bin(N) = {bin(r) : r ∈ N}. Similarly, given a proof scheme ψ = (φ_1, ..., φ_n), we let the tally and binary representations of ψ be given by

tal(ψ) = 4 tal(φ_1) 4 ··· 4 tal(φ_n) 4 (5.3)

bin(ψ) = 4 bin(φ_1) 4 ··· 4 bin(φ_n) 4. (5.4)

Finally, given a finite set of proof schemes Γ = {ψ_1, ..., ψ_s}, we let the tally and binary representations of Γ be given by

tal(Γ) = 5 tal(ψ_1) 5 ··· 5 tal(ψ_s) 5 (5.5)

bin(Γ) = 5 bin(ψ_1) 5 ··· 5 bin(ψ_s) 5. (5.6)

Definition 5.1.62 We say that the nonmonotonic rule system U is polynomial time locally determined in tally if the nonmonotonic rule system Tal(U) = (Tal(U), Tal(N)) has the following properties.

1. There is a polynomial time function g such that for any i, g(tal(i)) = tal(l_{k_i}), where k_i is the least number k such that l_k ≥ i.

2. There is a polynomial time function h such that for any i, h(tal(i)) = tal(Γ_i), where Γ_i is the set of all proof schemes of U_{l_{k_i}} whose conclusion is i, where tal(l_{k_i}) = g(tal(i)).

Similarly, we say that U is polynomial time locally determined in binary if Definition (5.1.62) holds where we uniformly replace all tally representations by binary representations. We can also define the notions of U being linear time, exponential time, and polynomial space in tally or binary in a similar manner.

This given, we then have the following:

Theorem 5.1.63 1. Suppose that U is a polynomial time locally determined nonmonotonic rule system in tally which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ NP.

2. Suppose that U is a polynomial space locally determined nonmonotonic rule system in tally which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ PSPACE.

3. Suppose that U is a polynomial time locally determined nonmonotonic rule system in binary which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ NEXPTIME.

4. Suppose that U is a polynomial space locally determined nonmonotonic rule system in binary which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ ⋃_{c>0} DSPACE(2^{n^c}).

Proof: For (1), suppose that l = l_t, where we recall that the set of levels of U is {l_0 < l_1 < ...}. Then for any i > l_t, consider the level l_{k_i}, where g(tal(i)) = tal(l_{k_i}). By the level extension property, it follows that there is an extension E_{l_{k_i}} of U_{l_{k_i}} such that E_{l_{k_i}} ∩ {0, ..., l} = E_l. Moreover, it must be the case that E ∩ {0, ..., l_{k_i}} = E_{l_{k_i}}, since otherwise we could use the level extension property to show that there is a sequence of extensions {E_{l_j} : j ≥ k_i} such that for each j > k_i, E_{l_j} is an extension of U_{l_j} where E_{l_j} ∩ {0, ..., l_{j-1}} = E_{l_{j-1}}. One can then easily prove that E' = ⋃_{j ≥ k_i} E_{l_j} is an extension of U such that E' ∩ {0, ..., l} = E_l, contradicting our assumption that E is the unique extension of U such that E ∩ {0, ..., l} = E_l.

It follows that to decide if i ∈ E, we need only guess E_{l_{k_i}}, verify that it is an extension of U_{l_{k_i}}, and check whether i ∈ E_{l_{k_i}}. We claim that this is an NP process. That is, we first guess the sequence χ_{E_{l_{k_i}}}(0) ··· χ_{E_{l_{k_i}}}(l_{k_i}), where χ_{E_{l_{k_i}}} is the characteristic function of E_{l_{k_i}}. Note that our conditions ensure that there is some polynomial p such that l_{k_i} ≤ p(|tal(i)|). It follows that we can compute l_{k_0} = g(tal(0)), l_{k_1} = g(tal(1)), ..., l_{k_i} = g(tal(i)) in polynomial time in |tal(i)|. Since for each j, l_{k_j} is the least level greater than or equal to j, it follows that l_{k_0} ≤ l_{k_1} ≤ ··· ≤ l_{k_i}. Thus, if k_i = s, we can find l_0 < l_1 < ··· < l_s in polynomial time in |tal(i)|. Note that, by assumption, t ≤ s. Now consider E_r = E_{l_{k_i}} ∩ {0, ..., l_r} for r = t, t+1, ..., s. By our definition of levels, it must be the case that each E_r is an extension of U_{l_r}. That is, if x ∉ E_r, there can be no proof scheme ψ of U_{l_r} such that cln(ψ) = x and supp(ψ) ∩ E_r = ∅, since otherwise ψ would witness that x ∈ E_s. Vice versa, if x ∈ E_r, then x ∈ E_s, and hence there is a proof scheme θ_x of U_{l_s} such that cln(θ_x) = x and supp(θ_x) ∩ E_s = ∅. But since l_r is a level, there must be a proof scheme ψ_x such that max(ψ_x) ≤ l_r, cln(ψ_x) = x, and supp(ψ_x) ∩ (E_s ∩ {0, ..., l_r}) = ∅. Thus ψ_x is a proof scheme which witnesses that x ∈ E_r. Note that if l_{r-1} < x ≤ l_r, then ψ_x ∈ Γ_x. Thus, since we can also compute h(tal(0)) = tal(Γ_0), h(tal(1)) = tal(Γ_1), ..., h(tal(l_{k_i})) = tal(Γ_{l_{k_i}}) in polynomial time in |tal(i)|, it follows that to check that E_{l_{k_i}} is an extension, we need only verify that for each x ≤ l_{k_i}, χ_{E_{l_{k_i}}}(x) = 1 iff there is a proof scheme ψ_x ∈ Γ_x such that supp(ψ_x) ∩ E_{l_{k_i}} = ∅. It is easy to see that for each such x, our codings of proof schemes and rules are such that we can decode tal(Γ_x) and check if there is such a ψ_x in polynomial time in l_{k_i}. Thus we can verify that E_{l_{k_i}} is an extension of U_{l_{k_i}} in polynomial time in |tal(i)|. Hence it follows that E ∈ NP.
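The verification half of this guess-and-verify argument can be sketched as follows (a simplified illustration of our own, not the dissertation's construction: Gamma maps each x to its finite list of minimal proof schemes, each given as a (conclusion, support) pair whose support is the set of elements that must be absent from the extension, and chi is the guessed characteristic string):

```python
def verify_extension(chi, Gamma):
    """chi: 0/1 list guessed for E intersected with {0, ..., L}.  Accept iff
    for every x, chi[x] == 1 exactly when some proof scheme for x has
    support disjoint from the guessed set."""
    E = {x for x, bit in enumerate(chi) if bit == 1}
    for x in range(len(chi)):
        # x is witnessed iff some scheme's support avoids the guessed set
        witnessed = any(not (supp & E) for _concl, supp in Gamma.get(x, []))
        if (chi[x] == 1) != witnessed:
            return False
    return True
```

Each check touches only the finitely many schemes in Gamma[x], which is what keeps the verification polynomial in the length of the guessed string.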

The proof of part (2) is similar. However, since in this case the length of the sequence χ_{E_{l_{k_i}}}(0) ··· χ_{E_{l_{k_i}}}(l_{k_i}) is bounded by p(|tal(i)|) for some polynomial p, we do not have to guess it. That is, in p(i) space, we check all strings in {0,1}^{l_{k_i}} to see if they are the characteristic function of an extension E* of U_{l_{k_i}} such that E* ∩ {0, ..., l} = E_l. Since there is only one such extension with this property, we can search until we find it. Thus our computations will require only polynomial space.

The proof of parts (3) and (4) uses the same algorithms as in parts (1) and (2). However, in this case the string χ_{E_{l_{k_i}}}(0) ··· χ_{E_{l_{k_i}}}(l_{k_i}) may be as long as 2^{p(|bin(i)|)} for some polynomial p. Thus the algorithm could take on the order of 2^{|bin(i)|^c} steps in case (3) and require 2^{|bin(i)|^c} space in case (4).

We should note that if we replace the hypotheses of polynomial time and polynomial space by linear time and linear space in parts (3) and (4) of Theorem (5.1.63) respectively, then we get the following.

Theorem 5.1.64 1. Suppose that U is a linear time locally determined nonmonotonic rule system in binary which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ NEXT.

2. Suppose that U is a linear space locally determined nonmonotonic rule system in binary which has the level extension property. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ EXPSPACE.

It is easy to show that we can weaken the hypothesis in Theorems (5.1.63) and (5.1.64) that there is a unique extension E of U extending E_l to the assumption that there are only finitely many extensions of U extending E_l, and obtain the conclusion that all of the extensions extending E_l are in the same corresponding complexity classes. However, if we make no assumption about the number of extensions of U which extend E_l, then the only thing we can do is try to construct the lexicographically least extension extending E_l. One can see that in cases (2) and (4) there would be no change in the conclusion. However, in case (1) the computations could take 2^{n^c} steps, and in case (3) the computations could require 2^{2^{n^c}} steps.

Finally, we note that similar results can be proven using strong levels instead of the level extension property. We state the appropriate definitions and results without proof. Recall that if U has strong levels, then we let {s_0 < s_1 < ...} denote the set of all strong levels of U.


Definition 5.1.65 We say that the nonmonotonic rule system U has polynomial time strong levels in tally if the nonmonotonic rule system Tal(U) = (Tal(U), Tal(N)) has strong levels and the following properties.

1. There is a polynomial time function g such that for any i, g(tal(i)) = tal(s_{k_i}), where k_i is the least number k such that s_k ≥ i.

2. There is a polynomial time function h such that for any i, h(tal(i)) = tal(Γ_i), where Γ_i is the set of all proof schemes of U_{s_{k_i}} whose conclusion is i and tal(s_{k_i}) = g(tal(i)).

This given, we then have the following.

Theorem 5.1.66 1. Suppose that U is a nonmonotonic rule system which has polynomial time strong levels in tally. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ NP.

2. Suppose that U is a nonmonotonic rule system which has polynomial space strong levels in tally. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ PSPACE.

3. Suppose that U is a nonmonotonic rule system which has polynomial time strong levels in binary. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ NEXPTIME.

4. Suppose that U is a nonmonotonic rule system which has polynomial space strong levels in binary. If l is a level of U and E_l is an extension of U_l such that there is a unique extension E of U with E ∩ {0, ..., l} = E_l, then E ∈ ⋃_{c>0} DSPACE(2^{n^c}).


5.2 Computability and Complexity of Common Derivations

Corollary 5.2.67 Let <U, N> be a highly computable nonmonotonic rule system. Then, for every sentence φ ∈ ⋂E(<U, N>), there exists a computable standard common derivation d_φ that generates φ in every extension of <U, N>.

Proof: Using the standard common derivation defined as in Theorem 3.1.24, the definition of the rule yields an algorithm for constructing it. Since the system <U, N> is computable, we have that each of U and N is computable, so that the set of minimal proof schemes Dr_ψ for any sentence ψ ∈ U is computable. Hence the common derivation is computable.

Theorem 5.2.68 Let <U, N> be a highly exponential time (polynomial time, etc.) nonmonotonic rule system. Then, for every sentence φ ∈ ⋂E(<U, N>), there exists an exponential time standard common derivation d_φ that generates φ in every extension of <U, N>.

Proof: We use the same definition for the standard common derivation as in Theorem 3.1.24. Since the rule system is highly exponential time, we have that the set of minimal proof schemes Dr_φ may be computed in exponential time. Once this is done, we need only list the last rule in each proof scheme and construct the common derivation. Since there are only finitely many proof schemes, the common derivation may be computed in exponential time, i.e., in the time required to compute the set of minimal proof schemes.

Theorem 5.2.69 Let <U, N> be a highly computable active nonmonotonic rule system such that φ appears in every extension of the system. If d_φ is the standard common derivation for φ in <U, N>, then <U, N ∪ {d_φ}> is highly computable and active.


Proof: Suppose <U, N> is a highly computable active nonmonotonic rule system and φ appears in every extension of the system. Let d_φ be the standard common derivation for φ in <U, N> as previously defined, and consider the system <U, N ∪ {d_φ}>. This system is active and locally finite by Theorem 3.1.26. To show that it is highly computable, it remains to show that it is computable and that there exists an effective procedure which, given any sentence ψ ∈ U, produces the set Dr_ψ of minimal proof schemes for ψ. The set U has not changed, since we have added no new sentences to the system by the addition of the common derivation. Thus, U is still computable. Since the set N is computable and the common derivation is computable, we have that the set N ∪ {d_φ} is computable. Thus, the system <U, N ∪ {d_φ}> is computable. Since the original system <U, N> is highly computable, there exists an effective procedure which, given any sentence ψ ∈ U, produces the set Dr_ψ of minimal proof schemes for ψ. This procedure remains effective upon the addition of the common derivation to the set of nonmonotonic rules of the system. Thus the new system <U, N ∪ {d_φ}> is highly computable and active.

Lastly, we find the following results as to the computability of the common derivations d_1(φ) and d_2(φ).

Theorem 5.2.70 Let <U, N> be a highly computable nonmonotonic rule system. Then, for every sentence φ ∈ ⋂E(<U, N>), the common derivations d_1(φ) and d_2(φ) for φ are computable.

Proof: The definition of each rule yields an algorithm for constructing it. Since the system <U, N> is computable, we have that each of U and N is computable, so that the set of minimal proof schemes Dr_ψ, which is finite for any sentence ψ ∈ U, is computable. Since these rules are constructed from the last rule in each proof scheme, we have that they are computable.


Theorem 5.2.71 Let <U, N> be a highly exponential time nonmonotonic rule system. Then, for every sentence φ ∈ ⋂E(<U, N>), the common derivations d_1(φ) and d_2(φ) for φ are exponential time.

Proof: Since the rule system is highly exponential time, we have that the set of minimal proof schemes Dr_φ may be computed in exponential time. Once this is done, we need only list the last rule in each proof scheme and construct the rules d_1(φ) and d_2(φ). Since there are only finitely many proof schemes, the rules may be computed in exponential time, i.e., in the time required to compute the set of minimal proof schemes.

Theorem 5.2.72 Let <U, N> be a highly computable active nonmonotonic rule system such that φ appears in every extension of the system. If d_1(φ) is the common derivation for φ in <U, N>, then <U, N ∪ {d_1(φ)}> is highly computable and active.

Proof: Suppose <U, N> is a highly computable active nonmonotonic rule system and φ appears in every extension of the system. Let d_1(φ) be the common derivation for φ in <U, N> as previously defined, and consider the system <U, N ∪ {d_1(φ)}>. This system is active and locally finite by Theorem 3.1.26. To show that it is highly computable, it remains to show that it is computable and that there exists an effective procedure which, given any sentence ψ ∈ U, produces the set Dr_ψ of minimal proof schemes for ψ. The set U has not changed, since we have added no new sentences to the system by the addition of the common derivation. Thus, U is still computable. Since the set N is computable and the common derivation is computable, we have that the set N ∪ {d_1(φ)} is computable. Thus, the system <U, N ∪ {d_1(φ)}> is computable. Since the original system <U, N> is highly computable, there exists an effective procedure which, given any sentence ψ ∈ U, produces the set Dr_ψ of minimal proof schemes for ψ. This procedure remains effective upon the addition of the common derivation to the set of nonmonotonic rules of the system. Thus the new system <U, N ∪ {d_1(φ)}> is highly computable and active.

CHAPTER 6
ALTERNATE FORMALISMS OF NONMONOTONIC LOGIC

In this chapter we will discuss how the results of the previous chapters translate into two particular formalisms of nonmonotonic reasoning, specifically Default Logic and Logic Programming. We will treat each separately, beginning with the necessary definitions and preliminary theorems. Continuing, we show how each theory is equivalent to nonmonotonic rule systems. Finally, we will see how some of the results of the previous chapters look through the eyes of these formalisms.

6.1 Default Logic: Preliminary Definitions and Theorems

Definition 6.1.73 A DEFAULT THEORY is a pair (D, W), where W is a set of formulas of the language L, and D is a set of DEFAULTS of the form d = A : B_1, ..., B_n / C, where A, B_1, ..., B_n, and C are all formulas of the language L and the default is interpreted as saying, "If A is true and B_1, ..., B_n are not contradicted, then conclude C." Call A the PREREQUISITE of d, and write p(d) = A. Call B_1, ..., B_n the JUSTIFICATIONS of d, and write J(d) = {B_1, ..., B_n}. Lastly, call C the CONSEQUENT of d, and write c(d) = C. Note that it is possible for a default to have no prerequisite and/or no justifications.

If E is a set of formulas of the language L, let Cn(E) denote the closure of E under semantical consequence.

Now, if (D, W) is a default theory and S is a set of formulas of the language, call a default d GENERATING FOR THE CONTEXT S if p(d) is in S and, for all β ∈ J(d), ¬β ∉ S. Let GD(D, S) be the set of all defaults in D that generate for the context S.


Most simply, an EXTENSION S of a default theory (D, W) is a set of formulas of the language L that is the smallest set T (by inclusion) such that T is theoretically closed, i.e., T = Cn(T), T contains W, and for all defaults d ∈ D such that p(d) ∈ T and ∀β ∈ J(d), ¬β ∉ S, we have c(d) ∈ T [BDK97]. This last property is best described as being "closed under the defaults of D."

Example 6.1.21 Let a, b, and c be atoms in the language L and let

D = {: {a}/b, : {c}/b, : {a}/¬c, : {c}/¬a}.

Then the default theory (D, ∅) has two extensions:

S_1 = Cn({b, ¬c})

and

S_2 = Cn({b, ¬a}).

Moreover, we find that

GD(D, S_1) = {: {a}/b, : {a}/¬c}

and

GD(D, S_2) = {: {c}/b, : {c}/¬a}.
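For finite propositional theories such as the one in Example 6.1.21, the computation of GD(D, S) can be sketched directly (a toy encoding entirely our own, not from the dissertation: defaults are (prerequisite, justifications, consequent) triples of literal strings, negation is marked by the prefix "not ", and membership in S stands in for full semantic consequence):

```python
def generating_defaults(D, S):
    """GD(D, S): defaults whose prerequisite is in S and none of whose
    justifications is contradicted, i.e. for all B in J(d), not-B is
    absent from S."""
    return [
        (p, J, c)
        for (p, J, c) in D
        if (p is None or p in S)              # an empty prerequisite always holds
        and all(("not " + b) not in S for b in J)
    ]
```

Run on the theory of Example 6.1.21 with the finite core {b, ¬c} of S_1, this returns exactly the two defaults : {a}/b and : {a}/¬c, matching GD(D, S_1).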

Definition 6.1.74 Now, we may well-order the defaults of a default theory (D, W) by some well-ordering ≺. We may then define AD_≺ to be the set of all defaults in D which are applied when the well-ordering is used to close W under the set of defaults. This is done in the following way: we define an ordinal η_≺. For every ε < η_≺ we define a set of defaults AD_ε and a default d_ε. If the sets AD_ε, ε < α, have been defined but η_≺ has not been defined, then

(1) If there is no default d ∈ D \ ⋃_{ε<α} AD_ε such that:

(a) W ∪ c(⋃_{ε<α} AD_ε) ⊢ p(d)

(b) W ∪ c(⋃_{ε<α} AD_ε) ⊬ ¬β, for every β ∈ J(d),

then η_≺ = α.

(2) Otherwise, define d_α to be the ≺-least such default d ∈ D \ ⋃_{ε<α} AD_ε. Then set AD_α = (⋃_{ε<α} AD_ε) ∪ {d_α}.

(3) Put AD_≺ = ⋃_{ε<η_≺} AD_ε.

Intuitively, we begin with the set W of formulas and apply the defaults of D in their order. Each time we apply only the ordering-least default which can be applied. At some point (since the ordinal is well-defined [MT93]) there will be no available defaults that may be applied. At this point, AD_≺ is the set of all defaults applied and η_≺ is the number of steps needed to reach this stopping point.
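This closure process can be sketched for a finite propositional theory (a toy encoding of our own: defaults are (prerequisite, justifications, consequent) triples of literal strings with negation marked by the prefix "not ", list order plays the role of the well-ordering ≺, and membership in the accumulated set stands in for derivability):

```python
def applied_defaults(W, D):
    """Compute AD_< for a finite list of defaults D, taken in list order."""
    derived = set(W)          # W together with the consequents applied so far
    applied = []
    while True:
        for d in D:           # scan in <-order for the least applicable default
            p, J, c = d
            if d in applied:
                continue
            if (p is None or p in derived) and \
                    all(("not " + b) not in derived for b in J):
                applied.append(d)
                derived.add(c)
                break
        else:
            return applied    # no default applies: this is AD_<
```

On the theory of Example 6.1.21 in the listed order, this applies the first three defaults and derives {b, ¬c}, whose closure is S_1; the second default : {c}/b is applied but does not generate for S_1, illustrating that GD(D, T_≺) can be a proper subset of AD_≺.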

Then, let T_≺ be Cn(W ∪ c(AD_≺)). This is the theory GENERATED BY ≺. Then, GD(D, T_≺) ⊆ AD_≺ [MT93], so that

Cn(W ∪ c(GD(D, T_≺))) ⊆ T_≺.

Now, if ≺ is a well-ordering of the defaults and for every β in J(AD_≺), ¬β ∉ Cn(W ∪ c(AD_≺)), then T_≺ is an extension of (D, W) [MT93]. That is, under this condition,

T_≺ = Cn(W ∪ c(AD_≺))

is an extension of (D, W). More precisely, if T_≺ = Cn(W ∪ c(GD(D, T_≺))), then T_≺ is an extension of (D, W). We now have that

(a) If S is an extension of (D, W), then there is some well-ordering ≺ of the defaults in D such that S = Cn(W ∪ c(AD_≺)) = Cn(W ∪ c(GD(D, S))) [MT93]. And,

(b) If S = T_≺ = Cn(W ∪ c(AD_≺)) = Cn(W ∪ c(GD(D, S))) for some well-ordering ≺, then S is an extension of (D, W).

Thus, S is an extension of (D, W) if and only if S = Cn(W ∪ c(GD(D, S))).

Let Σ_(D,W) be the set of all extensions of the default theory (D, W). Call two default theories (D_1, W_1) and (D_2, W_2) EQUIVALENT, written (D_1, W_1) ≡ (D_2, W_2), if they have exactly the same extensions, i.e., if Σ_(D_1,W_1) = Σ_(D_2,W_2).

Example 6.1.22 Let (D, W) be a default theory where D = {: /a → b} and W = {a}. Then consider the default theory (D', ∅) where D' = {: /a → b, : /a}. These two theories are equivalent, as each has the same single extension

S = Cn({a → b, a}).

Theorem 6.1.75 [MT93] Let S ⊆ L. Then, S is an extension of (D, W) if and only if S is an extension of (D ∪ D_0, ∅), where D_0 = {: /φ : φ ∈ W}.

Proof: Suppose S is an extension of (D, W). Then,

S = Cn(W ∪ c(GD(D, S))),

which is clearly equal to Cn(∅ ∪ c(GD(D ∪ D_0, S))), since each new default in D_0 will generate for any context S. Thus, each extension of (D, W) is an extension of (D ∪ D_0, ∅). For the same reasons, the converse also holds. Since they have exactly the same extensions, the theories are equivalent.

Theorem 6.1.76 [MT93] Let S and S' be two extensions of the default theory (D, W). Then, if S ⊆ S', then S = S'.

Theorem 6.1.77 [MT93] A default theory (D, W) has an inconsistent extension if and only if Sent(L) is an extension and (D, W) has no other extensions.


6.2 Equivalence of Default Logic to Nonmonotonic Rule Systems

The equivalence of Default Logic to Nonmonotonic Rule Systems has been shown by Cenzer, Marek, Nerode, and Remmel [1999], among others. Here, we give the equivalence as shown by Cenzer and Remmel. Let L be the propositional language underlying the given default logic. With L fixed, all our nonmonotonic rule systems will have the same universe, namely the set of all well-formed formulas of L. We now show how to interpret a given default theory as a nonmonotonic rule system.

Let (D, W) be a default theory. For every default rule

r = α : β_1, ..., β_m / γ,

construct the following nonmonotonic rule:

d_r = α : ¬β_1, ..., ¬β_m / γ.

Next, for every formula ψ ∈ L, define the rule

d_ψ = : /ψ,

and for all pairs of formulas χ_1, χ_2 define

mp_{χ_1,χ_2} = χ_1, χ_1 → χ_2 : / χ_2.

Now define the set of rules N_{D,W} as follows:

N_{D,W} = {d_r : r ∈ D} ∪ {d_ψ : ψ ∈ W or ψ is a tautology} ∪ {mp_{χ_1,χ_2} : χ_1, χ_2 ∈ L}.

We have the following result:

Theorem 6.2.78 [MNR90] Let (D, W) be a default theory. Then a set of formulas S is a default extension of (D, W) if and only if S is an extension of the nonmonotonic rule system (U, N_{D,W}).

Theorem 6.2.78 says that, at the cost of a simple syntactic transformation and an additional encoding of logic as (monotonic) rules, we can faithfully represent default logics by means of nonmonotonic rule systems.
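The syntactic transformation itself can be sketched as follows (an illustrative encoding of our own, with formulas as plain strings and nonmonotonic rules as (premises, restraints, conclusion) triples):

```python
def default_to_rule(default):
    """Map the default  alpha : beta_1,...,beta_m / gamma  to the
    nonmonotonic rule  alpha : not beta_1, ..., not beta_m / gamma."""
    alpha, betas, gamma = default
    premises = [alpha] if alpha is not None else []
    restraints = ["not " + b for b in betas]   # restraints are the negated justifications
    return (premises, restraints, gamma)

def axiom_rule(psi):
    # d_psi: conclude psi from no premises (used for psi in W or tautologies)
    return ([], [], psi)

def mp_rule(chi1, chi2):
    # mp_{chi1,chi2}: from chi1 and chi1 -> chi2, conclude chi2
    return ([chi1, chi1 + " -> " + chi2], [], chi2)
```

The axiom rules inject W and the tautologies, while the modus ponens rules supply the "encoding of logic as (monotonic) rules" mentioned above.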


6.3 Previous Results Through the Eyes of Default Logic

We will consider only those default theories which have at least one, but only finitely many, extensions. We will consider both the classical case and the constructive case.

We consider a single default theory (D, W) that has a finite number of extensions and where D and W are at most countable sets. We intend to explore the following question: if φ is a formula of the language L such that φ ∈ S for every extension S of (D, W), then what, if anything, can be said about the theory and/or the formula and its derivation in each extension?

Let D_0 be the set {: /φ : φ ∈ W}. Let D_1 = D ∪ D_0. We consider now the default theory (D_1, ∅). This theory is equivalent to the original theory (D, W) since they have the same set of extensions.

Theorem 6.3.79 For any formula φ of the language L such that φ ∈ ⋂Σ_(D,W), φ has a common derivation d^1_φ which generates φ in each extension S of (D, W).

Proof: We prove the theorem using the equivalent default theory (D_1, ∅). Then, since (D, W) and (D_1, ∅) have the same extensions, the theorem holds for (D, W). Suppose that φ is a formula of the language L such that φ ∈ ⋂Σ_(D_1,∅). Then φ ∈ S for every extension S of (D_1, ∅). Since each extension S is of the form Cn(c(GD(D_1, S))), there must be, for each S, a finite set of defaults {d^S_1, ..., d^S_n} ⊆ D_1 such that {c(d^S_1), ..., c(d^S_n)} ⊢ φ. Define d_(φ,S) by

d_(φ,S) = ⋀_i p(d^S_i) : ⋃_i J(d^S_i) / ⋀_i c(d^S_i),

where we simply omit from the prerequisite conjunction those i for which d^S_i has no prerequisite.

Claim 1: For any extension S of (D_1, ∅), d_(φ,S) ∈ GD(D_1 ∪ {d_(φ,S)}, S).


Proof of Claim 1: Let S be an extension of (D_1, ∅). Consider d_(φ,S) as defined above. Clearly d_(φ,S) ∈ D_1 ∪ {d_(φ,S)}. Also, p(d_(φ,S)) = ⋀_i p(d^S_i) ∈ S, since each p(d^S_i) is in S, since each d^S_i must be applied to deduce φ and therefore each d^S_i is in GD(D_1, S). Now, for every β in J(d_(φ,S)) = ⋃_i J(d^S_i), ¬β ∉ S, since no β in any of the J(d^S_i) has its negation in S. Thus, d_(φ,S) ∈ GD(D_1 ∪ {d_(φ,S)}, S). Moreover, c(d_(φ,S)) = ⋀_i c(d^S_i) deduces φ, since {c(d^S_1), ..., c(d^S_n)} ⊢ φ. Thus, d_(φ,S) deduces φ in S.

Now we have, for each extension S of (D_1, ∅), a default d_(φ,S) which deduces φ in S. For any sets of formulas J_1, ..., J_m, define J_1 ∨ ... ∨ J_m to be the set of disjunctions {j_1 ∨ ... ∨ j_m : j_i ∈ J_i, i = 1, ..., m}. We then define d^1_φ to be

d^1_φ = ⋁_{i=1}^k p(d_(φ,S_i)) : ⋁_{i=1}^k J(d_(φ,S_i)) / ⋁_{i=1}^k c(d_(φ,S_i)),

where S_1, ..., S_k are all the extensions of the default theory (D, W). Call d^1_φ the COMMON DERIVATION of φ in (D_1, ∅).

Claim 2: d^1_φ ∈ GD(D_1 ∪ {d^1_φ}, S) for every extension S of (D_1, ∅).

Proof of Claim 2: Suppose that φ is a formula of the language such that φ ∈ ⋂Σ_(D_1,∅). Consider d^1_φ as defined above. Clearly d^1_φ ∈ D_1 ∪ {d^1_φ}. Also, for any extension S_t of (D_1, ∅), for all β in J(d_(φ,S_t)), ¬β ∉ S_t, so that for every β ∈ J(d^1_φ), β = j_1 ∨ ... ∨ j_k where j_i ∈ J(d_(φ,S_i)) for each i = 1, ..., k. Then, for any extension S_t of (D_1, ∅), ¬β ∈ S_t if and only if ¬j_1 ∧ ... ∧ ¬j_k ∈ S_t, which would imply that ¬j_t ∈ S_t, a contradiction. Thus, ¬β ∉ S for any extension S of (D_1, ∅). Lastly, p(d^1_φ) ∈ S for any extension S of (D_1, ∅), since for any S, p(d_(φ,S)) ∈ S and p(d^1_φ) = ⋁_S p(d_(φ,S)). Thus, d^1_φ ∈ GD(D_1 ∪ {d^1_φ}, S) for every extension S of (D_1, ∅). Furthermore, c(d^1_φ) = ⋁_S c(d_(φ,S)) deduces φ in every extension S, since each c(d_(φ,S)) deduces φ in S. Thus, for any formula φ of the language such that φ ∈ ⋂Σ_(D_1,∅), φ has a common derivation d^1_φ which generates φ in each extension S of (D_1, ∅). Note that d^1_φ may be in D_1, but then (D_1 ∪ {d^1_φ}, ∅) will have the same set of extensions as (D_1, ∅), i.e., (D_1 ∪ {d^1_φ}, ∅) will be equivalent to (D_1, ∅). Further note that since d^1_φ generates φ for each extension S of (D_1, ∅), we have that it generates φ for the intersection of all the extensions; that is, it generates φ for ⋂Σ_(D,W). This completes the proof of Claim 2 and thus of the theorem.

Theorem 6.3.80 For any formula φ of the language such that φ ∈ ⋂Σ_(D,W), define d*_φ to be

d*_φ = ⋁_S p(d_(φ,S)) : ⋃_S J(d_(φ,S)) / ⋁_S c(d_(φ,S)).

Then, d*_φ ∈ GD(D_1 ∪ {d*_φ}, ⋂Σ_(D,W)).

Proof: Again, we prove the theorem using the equivalent default theory (D_1, ∅). Since (D, W) and (D_1, ∅) have the same extensions, the theorem holds for (D, W). Let φ be a formula of the language such that φ ∈ ⋂Σ_(D_1,∅). Consider d*_φ as defined above. Clearly d*_φ ∈ D_1 ∪ {d*_φ}. Now, for every extension S in Σ_(D_1,∅) we have that d_(φ,S) generates for S, so that p(d_(φ,S)) ∈ S. Thus, p(d*_φ) = ⋁_S p(d_(φ,S)) ∈ ⋂Σ_(D_1,∅). Lastly, for every extension S in Σ_(D_1,∅) we have that ∀β ∈ J(d_(φ,S)), ¬β ∉ S. Now, ∀β ∈ J(d*_φ) = ⋃_S J(d_(φ,S)), β ∈ J(d_(φ,S)) for some extension S, so that ¬β ∉ S. Hence, ¬β ∉ ⋂Σ_(D_1,∅). Thus d*_φ generates for ⋂Σ_(D_1,∅).

Note that although this new default generates for the intersection of all the extensions, it may not generate for each particular extension.

Example 6.3.23 Consider the default theory (D, ∅) where

D = {: {¬a}/b, : {¬b}/a}.

This default theory has two extensions,

S_1 = Cn({b})

and

S_2 = Cn({a}).

We have that the formula a ∨ b is in both of the extensions, and we find that

d*_{a∨b} = : {¬a, ¬b}/a ∨ b.

By the theorem, d*_{a∨b} generates for the intersection of the two extensions. However, it does not generate for either of the extensions S_1 or S_2, since b ∈ S_1 and a ∈ S_2, respectively.

Since there are only a finite number of the d^S_i, there are only a finite number of possible prerequisites, sets of justifications, and consequents of d_(φ,S). Thus there are only a finite number of possibilities for d_(φ,S) for each extension. Because of this, and because we assume that the default theory has only finitely many extensions, we are not in danger of having an infinite disjunction or union in d^1_φ or in d*_φ.

Example 6.3.24 The common derivation of a formula φ in (D, W) need not be found in D or in W, as this example illustrates. Let a, b, c be atoms in the language and let

D = {: {a}/b, : {c}/b, : {a}/¬c, : {c}/¬a}.

Then the default theory (D, ∅) has the two extensions

S_1 = Cn({b, ¬c})

and

S_2 = Cn({b, ¬a}).

Also, as before,

GD(D, S_1) = {: {a}/b, : {a}/¬c}

and

GD(D, S_2) = {: {c}/b, : {c}/¬a}.

There is no default of D that derives b in both extensions. However, we may consider the common derivation of b in (D, ∅), which is

d^1_b = : {a ∨ c}/b ∨ b = : {a ∨ c}/b.

Then, d^1_b will generate b in each of these extensions.
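The assembly of d^1_φ from the per-extension defaults d_(φ,S_1), ..., d_(φ,S_k) can be sketched for finite propositional theories (a toy encoding of our own: defaults are (prerequisite, justifications, consequent) triples of strings, and "or" builds the disjunctions of Theorem 6.3.79):

```python
from itertools import product

def common_derivation(per_extension_defaults):
    # Disjoin the prerequisites, omitting absent ones as in the proof
    # of Theorem 6.3.79.
    ps = [p for (p, J, c) in per_extension_defaults if p is not None]
    prereq = " or ".join(ps) if ps else None
    # J_1 v ... v J_k: all disjunctions choosing one justification from
    # each set; empty if some extension's default has no justifications.
    Js = [J for (p, J, c) in per_extension_defaults]
    justs = [" or ".join(combo) for combo in product(*Js)]
    # Disjoin the consequents.
    concl = " or ".join(c for (p, J, c) in per_extension_defaults)
    return (prereq, justs, concl)
```

On Example 6.3.24, the two per-extension defaults : {a}/b and : {c}/b combine to a default with justification a ∨ c and consequent b ∨ b, matching d^1_b before simplification.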

Theorem 6.3.81 Σ_(D,W) ⊆ Σ_(D∪{d^1_φ},W) for any φ ∈ ⋂Σ_(D,W). Equivalently, Σ_(D_1,∅) ⊆ Σ_(D_1∪{d^1_φ},∅) for any φ ∈ ⋂Σ_(D_1,∅).

Proof: Suppose that φ ∈ ⋂Σ_(D_1,∅), and suppose further that S is an extension of (D_1, ∅). Then, φ ∈ S = Cn(c(GD(D_1, S))). We want to show that

S = Cn(c(GD(D_1 ∪ {d^1_φ}, S))).

Now, c(GD(D_1, S)) ⊆ c(GD(D_1 ∪ {d^1_φ}, S)). Thus, S = Cn(c(GD(D_1, S))) ⊆ Cn(c(GD(D_1 ∪ {d^1_φ}, S))). We will now show that c(GD(D_1 ∪ {d^1_φ}, S)) ⊆ S = Cn(c(GD(D_1, S))). Let d ∈ GD(D_1 ∪ {d^1_φ}, S). If d ∈ D_1, then d ∈ GD(D_1, S), so that c(d) ∈ Cn(c(GD(D_1, S))) = S. If d ∉ D_1, then d = d^1_φ, so that c(d) = ⋁_S c(d_(φ,S)). Now, d_(φ,S) generates for S, so that c(d_(φ,S)) ∈ c(GD(D_1, S)) and therefore

c(d) = ⋁_S c(d_(φ,S)) ∈ S = Cn(c(GD(D_1, S))).

Thus, c(GD(D_1 ∪ {d^1_φ}, S)) ⊆ S, so that S = Cn(c(GD(D_1 ∪ {d^1_φ}, S))). Thus, S is an extension of (D_1 ∪ {d^1_φ}, ∅), and Σ_(D_1,∅) ⊆ Σ_(D_1∪{d^1_φ},∅).

The converse of this theorem is false. As the next two examples show, the two default theories (D, W) and (D ∪ {d^1_φ}, W) may be equivalent, but need not be. That is, we may have that Σ_(D,W) = Σ_(D∪{d^1_φ},W), but this is not always so.


Example 6.3.25 Consider the default theory where W = ∅ and D = {: {a}/b, : {c}/b, : {a}/¬c, : {c}/¬a}. This default theory has two extensions:

S_1 = Cn({b, ¬c})

and

S_2 = Cn({b, ¬a}).

We found previously that b is in each extension and that the common derivation of b in (D, ∅) is

d^1_b = : {a ∨ c}/b ∨ b = : {a ∨ c}/b.

Adding this new default to D, we see that (D ∪ {d^1_b}, ∅) has the same extensions S_1 and S_2, since applying d^1_b does not prevent the application of any default in D.

Example 6.3.26 Now consider the default theory (D, ∅) where

D = {p_1 : /c_1, p_2 : /c_2, : {¬a, ¬p_1}/p_2, : {¬a, ¬p_2}/p_1, : {¬p_1, ¬p_2}/p_1 ∨ p_2, c_1 ∨ c_2 : {¬c_1, ¬c_2}/a}.

This theory has two extensions,

S_1 = Cn({p_2, c_2})

and

S_2 = Cn({p_1, c_1}).

The formula c_1 ∨ c_2 is in each extension, and we find that

d_(c_1∨c_2, S_1) = p_2 : /c_2

and

d_(c_1∨c_2, S_2) = p_1 : /c_1,

so that

d^1_{c_1∨c_2} = p_1 ∨ p_2 : /c_1 ∨ c_2.

Then, S_1 and S_2 are both extensions of (D ∪ {d^1_{c_1∨c_2}}, ∅), but this default theory will have a third extension

S_3 = Cn({p_1 ∨ p_2, c_1 ∨ c_2, a}),

which is not an extension of the original default theory (D, ∅).

In the previous example, S_3 is the only new extension obtained by adding the common derivation to the set of defaults. To see this, let S be any new extension. The common derivation must generate for S, else S is an extension of the original system. Thus, we have that p_1 ∨ p_2 ∈ S, so that the conclusion c_1 ∨ c_2 is in S as well. If p_1 ∈ S, then c_1 ∈ S by the default p_1 : /c_1, so that S_2 ⊆ S, a contradiction to the noninclusive property of extensions, since S is not an extension of the original system. Similarly, if p_2 ∈ S, then c_2 ∈ S, so that S_1 ⊆ S, a contradiction. Thus, neither of p_1 or p_2 is in S. Also, if a is not in S, then the rule : {¬a, ¬p_2}/p_1 applies to conclude p_1 in S. Thus, a ∈ S. From this we now have that S_3 ⊆ S, so that S_3 = S since extensions are noninclusive. Thus, there is only the one new extension.

Suppose we let I₁ be the intersection of all the extensions of the original default
theory (D, ∅). Then,

I₁ = Cn({(p₂ ∧ c₂) ∨ (p₁ ∧ c₁)}).

We consider the new default theory

(D₂, ∅) = (D ∪ {d¹_(c₁∨c₂)}, ∅)

and the set I₂ of all conclusions common to every extension of (D₂, ∅). We find that

I₂ = Cn({(p₁ ∨ p₂) ∧ (c₁ ∨ c₂)}) ⊆ I₁.


Moreover, we could choose a conclusion common to all the extensions of
(D₂, ∅), such as p₁ ∨ p₂. This has the common derivation

d²_(p₁∨p₂) = : {¬a ∨ ¬p₁, ¬a ∨ ¬p₂, ¬a, ¬p₁ ∨ ¬p₂}/p₁ ∨ p₂.

In any new extension S of

(D₃, ∅) = (D₂ ∪ {d²_(p₁∨p₂)}, ∅)

that was not an extension of the theory (D₂, ∅), this common derivation must apply.
Similarly to the previous argument, none of p₁, p₂, c₁ or c₂ can be in S, else we
contradict the noninclusive property of extensions. p₁ ∨ p₂ must be in S since the
common derivation applies. Thus, by the common derivation for c₁ ∨ c₂, we conclude
c₁ ∨ c₂ in S and thus a is in S by the appropriate rule from D. This is a contradiction
since the common derivation requires that a not be in S. Thus, there are no new
extensions; that is, (D₂, ∅) has the same set of extensions as (D₃, ∅).

On a different note, we may force the intersections I₁ and I₂ to be equal by
taking φ to be ⋀ I₁. This appears in every extension of (D, ∅). Letting

(D₂, ∅) = (D ∪ {d¹_φ}, ∅),

we find that if S is any new extension, then φ ∈ S since the common derivation must
apply (else S is not new). Thus,

I₂ ⊆ I₁ = I₁ ∩ S ⊆ I₂

so that I₁ = I₂.

If we have a formula φ appearing in some but not all the extensions of (D₁, ∅),
we may in the same fashion as Theorem 6.3.79 create a common derivation for φ that
will generate in each extension of (D₁, ∅) in which it appears. However, we find
that in this case Theorem 6.3.81 becomes false, as the next example illustrates.

Example 6.3.27 Let (D, ∅) be the default theory where

D = {: {¬a₁, ¬a₂}/a₃, : {¬a₂, ¬a₃}/a₁, : {¬a₁, ¬a₃}/a₂,

p₁ : {a₁}/c₁, p₂ : {a₂}/c₂, a₃ : /p₁ ∨ p₂, a₁ : /p₁, a₂ : /p₂}.

This theory has, as three of its extensions,

S₁ = Cn({a₃, p₁ ∨ p₂}),

S₂ = Cn({a₂, p₂, c₂}),

and

S₃ = Cn({a₁, p₁, c₁}).

We see that the formula c₁ ∨ c₂ is in both S₂ and S₃, but is not in S₁. We find the
standard common derivation for c₁ ∨ c₂ to be

d¹_(c₁∨c₂) = p₁ ∨ p₂ : {a₁ ∨ a₂}/c₁ ∨ c₂.

Adding this new default to the set of defaults D, we find that both S₂ and S₃ are
extensions of the new default theory (D ∪ {d¹_(c₁∨c₂)}, ∅). S₁, however, is not an exten-
sion of (D ∪ {d¹_(c₁∨c₂)}, ∅) since it is no longer closed under the set of defaults. It is
important to note that S₁ is a subset of Cn({a₃, p₁ ∨ p₂, c₁ ∨ c₂}), which is an extension of
(D ∪ {d¹_(c₁∨c₂)}, ∅).

Theorem 6.3.82 Let (D₁, ∅) be a default theory such that φ appears in some but not
all of the extensions. Consider d¹_φ to be the common derivation of φ as defined in
Theorem 6.3.79. Then, for any extension S of (D₁, ∅), there is an extension S′ of
(D₁ ∪ {d¹_φ}, ∅) such that S ⊆ S′. More specifically, if φ ∈ S then S = S′, and if
φ ∉ S then S is strictly contained in S′.


Proof: Suppose that (D₁, ∅) is a default theory such that φ appears in at
least one, but not all, of the extensions. Consider d¹_φ to be the common derivation of
φ as defined in Theorem 6.3.79. If φ appears in only one of the extensions of (D₁, ∅),
then the common derivation d¹_φ is found in D₁ so that (D₁, ∅) = (D₁ ∪ {d¹_φ}, ∅), and
the theorem is trivially true. Thus, suppose that φ appears in at least two extensions
of the default theory (D₁, ∅), but does not appear in every extension. Let S be an
extension of (D₁, ∅). If S is an extension of the new system (D₁ ∪ {d¹_φ}, ∅), we are
done. Also, if φ is in S, then S is an extension of (D₁ ∪ {d¹_φ}, ∅) by the proof of
Theorem 6.3.81. Hence, consider the case in which φ ∉ S, and S is not an extension
of the new system (D₁ ∪ {d¹_φ}, ∅). Since S is an extension of (D₁, ∅), we have that
S = Cn(c(GD(D₁, S))). If d¹_φ does not generate for S, then

GD(D₁ ∪ {d¹_φ}, S) = GD(D₁, S)

so that

S = Cn(c(GD(D₁ ∪ {d¹_φ}, S)))

and thus S is an extension of the new system (D₁ ∪ {d¹_φ}, ∅), a contradiction. Thus,
d¹_φ generates for S so that

φ = c(d¹_φ) ∈ c(GD(D₁ ∪ {d¹_φ}, S)).

Let

S₁ = Cn(S ∪ {c(d¹_φ)}).

Then,

S₁ = Cn(c(GD(D₁ ∪ {d¹_φ}, S)))

= Cn(c(GD(D₁ ∪ {d¹_φ}, S₁))).

Thus, S₁ is an extension of the new system (D₁ ∪ {d¹_φ}, ∅) and clearly S is strictly
contained in S₁. Thus, if S is an extension of (D₁, ∅) then either S is an extension
of (D₁ ∪ {d¹_φ}, ∅) (in the case of φ belonging to S) or S is strictly contained in an
extension of (D₁ ∪ {d¹_φ}, ∅) (in the case of φ not belonging to S).

Referring to Example 6.3.26, we may consider any default theory (D, W), the
equivalent default theory (D₁, ∅), and the set

I₁ = ⋂ E_(D₁,∅).

Differently now, for every φ in this set, we can find a common derivation d¹_φ which
generates φ in each extension S of (D₁, ∅). Consider the default theory (D₂, ∅) defined
by

(D₁ ∪ {d¹_φ | φ ∈ I₁}, ∅).

We may then repeat the procedure to find

I₂ = ⋂ E_(D₂,∅),

and if this set is not empty, then we can find common derivations d²_φ for each φ in
I₂ such that d²_φ generates in every extension S of (D₂, ∅). Then, we may consider
the default theory (D₃, ∅) defined by

(D₂ ∪ {d²_φ | φ ∈ I₂}, ∅).

We continue this construction by, assuming that Dₙ is defined, letting

Iₙ = ⋂ E_(Dₙ,∅).

Then, letting dⁿ_φ be the common derivation for φ ∈ Iₙ in (Dₙ, ∅), we define Dₙ₊₁ to
be

(Dₙ ∪ {dⁿ_φ | φ ∈ Iₙ}, ∅).

This may be continued until Iₖ is empty for some k and/or Dₖ = Dₗ for all l > k
for some k. This inductive definition begs the following questions. Is there a limit to
this process, i.e., is there always some k for which Iₖ is empty and/or Dₖ = Dₗ for
all l > k? Does it make a difference if we work in the case where D and W are finite
versus countably or uncountably infinite? These questions should be investigated.

Example 6.3.28 Consider the first default theory of this section. Using this default
theory, we would find D₂ to be

D ∪ {d¹_b} = {: {a}/b, : {c}/b, : {a}/¬c, : {c}/¬a, : {a ∨ c}/b}.

(D₂, ∅) has the same extensions as (D, ∅) so that I₂ = I₁ = {b} and d²_b = d¹_b. Then
Dₖ = D₂ for all k > 1.

Remark 6.3.83 By Theorem 6.3.81, we have that E_(D₁,∅) ⊆ E_(D₂,∅) ⊆ ... ⊆ E_(Dₙ,∅) ⊆
E_(Dₙ₊₁,∅) ⊆ ....

For a finite language, or just a finite I₁, we must eventually have Iₙ₊₁ = Iₙ
for some n. At this point, we find that we will have a common derivation in Dₙ for
each φ ∈ Iₙ, when we consider Dₙ to be defined using all of the common derivations
for conclusions in Iₙ₋₁. (Dₙ, ∅) will be said to HAVE common derivations.

Theorem 6.3.84 For any default theory (D, ∅) with finite I₁, there exists an n such
that (Dₙ, ∅) has common derivations, and the default theory Dₙ is equivalent to the
default theory Dₙ₊ₖ for any k.

For an alternate approach, we might consider the default theory (D₁, ∅) and
the set I₁ as before. However, we may choose one element φ from I₁ and define D₂
to be the default theory

(D₁ ∪ {d¹_φ}, ∅)

as seen for Example 6.3.26. We may then consider the set of all formulas which
appear in every extension of D₂, choose some ψ among those, and define D₃ to be

(D₂ ∪ {d²_ψ}, ∅).


We may continue in this way; however, we will keep with the previous construction.

Using either construction, and considering the language to be infinite, we still
have that I₁ ⊇ I₂ ⊇ I₃ ⊇ .... We may define I* to be the intersection of all the Iₙ,
define D* to be the union of all the Dₙ, and define E* to be the union of all the sets
of extensions of the default theories (Dₙ, ∅). We then find the following:

Theorem 6.3.85 I* = ⋂ E*.

Proof: Suppose that φ ∈ I*. Then, φ ∈ Iₙ for all n so that φ ∈ ⋂ E_(Dₙ,∅)
for all n. Letting S be any extension in E*, we have that S is an extension of (Dₙ, ∅)
for some n so that φ ∈ S and thus φ ∈ ⋂ E*. For the reverse containment, suppose
that we have φ ∈ ⋂ E*. Then, φ ∈ S for any extension S ∈ E* so that φ ∈ S for any
extension S of any (Dₙ, ∅). Hence, φ ∈ Iₙ for any n so that φ ∈ ⋂ Iₙ = I*. Thus the
equality holds.

Theorem 6.3.86 E* ⊆ E_(D*,∅).

Proof: Let S ∈ E* so that S is an extension of some (Dₙ, ∅) and an extension
of any (Dₘ, ∅) for m > n. We want to show that S is an extension of (D*, ∅). To do
this, suppose that a default d ∈ D* generates for S. Then d ∈ Dₘ for some m, and hence
for some m > n. Since S is an extension of (Dₘ, ∅), we have that the consequence of
d is in S so that S is an extension of (D*, ∅).

It is known that the extensions of a default theory form an antichain. That
is, if S₁ ⊆ S₂ where S₁ and S₂ are extensions of the same default theory (D, ∅), then
S₁ = S₂. Thus, the extensions of (D, ∅), whether or not we take the constructive view,
form an antichain so that, assuming they are distinct, no one extension is contained
in any other extension.

The following results are of general interest.

Theorem 6.3.87 ⋂ E_(D,∅) ⊇ Cn(⋂_S (c(GD(D, S)))).

Proof: For every extension S of (D, ∅),

⋂_S c(GD(D, S)) ⊆ c(GD(D, S)) ⊆ Cn(c(GD(D, S))) = S

so that ⋂_S c(GD(D, S)) ⊆ S. Thus, since each S ∈ E_(D,∅) is a theory, we have that

⋂ E_(D,∅) is a theory so that Cn(⋂_S (c(GD(D, S)))) ⊆ ⋂ E_(D,∅).

Theorem 6.3.88 Cn(⋂_S (c(GD(D, S)))) ⊆ Cn(c(GD(D, ⋂ E_(D,∅)))), and they are
equal if and only if ⋂ E_(D,∅) = S for all extensions S, or

∀d ∈ GD(D, ⋂ E_(D,∅)), ∀β ∈ J(d), ¬β ∉ ⋃ E_(D,∅).

Proof: Let c(d) ∈ ⋂_S (c(GD(D, S))). Then, d ∈ GD(D, S) for every ex-
tension S of (D, ∅). Thus, for every extension S, we have that p(d) ∈ S and
∀β ∈ J(d), ¬β ∉ S. So, p(d) ∈ ⋂ E_(D,∅) and ∀β ∈ J(d), ¬β ∉ ⋂ E_(D,∅). Thus,
d ∈ GD(D, ⋂ E_(D,∅)) so that c(d) ∈ c(GD(D, ⋂ E_(D,∅))). Hence,

⋂_S (c(GD(D, S))) ⊆ c(GD(D, ⋂ E_(D,∅)))

so that

Cn(⋂_S (c(GD(D, S)))) ⊆ Cn(c(GD(D, ⋂ E_(D,∅)))).

Now suppose that c(d) ∈ c(GD(D, ⋂ E_(D,∅))). Then, d ∈ GD(D, ⋂ E_(D,∅)) so that
p(d) ∈ ⋂ E_(D,∅) and ∀β ∈ J(d), ¬β ∉ ⋂ E_(D,∅). This means that p(d) ∈ S for every
extension S, but we do not necessarily have that ∀β ∈ J(d), ¬β ∉ ⋃ E_(D,∅), the
necessary condition if we are to have that c(d) ∈ ⋂_S (c(GD(D, S))). Now, if either
⋂ E_(D,∅) = S for all extensions S, or

∀d ∈ GD(D, ⋂ E_(D,∅)), ∀β ∈ J(d), ¬β ∉ ⋃ E_(D,∅),

then we would have that c(d) ∈ ⋂_S (c(GD(D, S))). Then,

c(GD(D, ⋂ E_(D,∅))) ⊆ ⋂_S (c(GD(D, S)))

so that

Cn(⋂_S (c(GD(D, S)))) = Cn(c(GD(D, ⋂ E_(D,∅)))).

Clearly, if Cn(⋂_S (c(GD(D, S)))) = Cn(c(GD(D, ⋂ E_(D,∅)))), then either ⋂ E_(D,∅) =
S for all extensions S, or

∀d ∈ GD(D, ⋂ E_(D,∅)), ∀β ∈ J(d), ¬β ∉ ⋃ E_(D,∅).

Example 6.3.29 Let D = {: /p, p : {e}/e, p : {¬a}/¬a}. Then, the default theory
(D, ∅) has the extensions

S₁ = Cn({p, e})

and

S₂ = Cn({p, ¬a})

so that

GD(D, S₁) = {: /p, p : {e}/e}

and

GD(D, S₂) = {: /p, p : {¬a}/¬a}.

Then,

GD(D, S₁) ∩ GD(D, S₂) = {: /p},

but

GD(D, S₁ ∩ S₂) = GD(D, {p}) = D

since p ∈ {p} and neither ¬e nor a is in {p}. We then see that

c(GD(D, S₁)) ∩ c(GD(D, S₂)) = {p} ≠ {p, e, ¬a} = c(GD(D, S₁ ∩ S₂)).

Hence,

Cn(c(GD(D, S₁)) ∩ c(GD(D, S₂))) = Cn({p})

≠ Cn(c(GD(D, S₁ ∩ S₂))) = Cn({p, e, ¬a}).

Corollary 6.3.89 Cn(⋂_S (c(GD(D, S)))) = Cn(c(GD(D, ⋂ E_(D,∅)))) if (D, ∅) is a
justification-free default theory.

Proof: Suppose (D, ∅) is a justification-free default theory. Then, the stan-
dardized theory (D₁, ∅) is also justification-free. Then,

∀d ∈ GD(D, ⋂ E_(D,∅)), ∀β ∈ J(d), ¬β ∉ ⋃ E_(D,∅)

since there does not exist β ∈ J(d). Thus,

Cn(⋂_S (c(GD(D, S)))) = Cn(c(GD(D, ⋂ E_(D,∅)))).

Note that a justification-free default theory is monotone, but the nature of
the result is still interesting.

We now take the case of constructive default theories. Again, consider a
single default theory (D, W) where D and W are at most countable sets. In this
section, we consider the same question about default theories as previously, but use
a constructive approach. That is, we consider that p ∨ q is not derived unless one of p
or q is derived. Before considering the theory as is, we first "standardize" the theory
as we did previously to a more suitable one (D₁, ∅). This is done in the following
way:

Let D₀ be the set {: /ψ | ψ ∈ W}. We consider now the default theory (D₁, ∅)
where D₁ = D ∪ D₀. This theory is equivalent to the original theory (D, W) since
they have the same set of extensions [MT93].


Again, we consider φ, a formula of the language L such that φ ∈ ⋂ E_(D₁,∅).
Then φ ∈ S for every extension S of (D₁, ∅). Since each extension S is of the
form Cn(c(GD(D₁, S))), there must be, for each S, a finite set of defaults {d^S_1,
..., d^S_n} ⊆ D₁ such that {c(d^S_1), ..., c(d^S_n)} ⊢ φ, as before. The advantage of the
constructive approach is that it allows us to view each set of justifications as a single
formula instead of as a set of formulas. Then, instead of asking if each element of the
set of justifications is not contradicted in a context, we would ask if the justification
itself is not contradicted in the context. As it stands, each default d^S_i has as its
justifications the set {β^i_1, ..., β^i_k} for some β^i_j formulas of the language. We may
instead let J(d^S_i) = β^i_1 ∧ ... ∧ β^i_k. Then ¬J(d^S_i) = ¬β^i_1 ∨ ... ∨ ¬β^i_k is in the extension
S if and only if ¬β^i_j ∈ S for some j, a contradiction since each d^S_i generates for S.
Thus, we have that for each extension S, ¬J(d^S_i) ∉ S for each i. We find that several
of the theorems of the last section still hold.
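The step from a justification set to a single conjunction is sound only under this constructive reading: classically, ¬J = ¬β₁ ∨ ... ∨ ¬βₖ can belong to a theory without any single ¬βᵢ belonging to it. A quick classical check of that gap (the notation is mine):

```python
from itertools import product

ATOMS = ["a", "b"]
MODELS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

def entails(S, f):
    """Classical entailment over a two-atom language via truth tables."""
    return all(f(v) for v in MODELS if all(g(v) for g in S))

not_a = lambda v: not v["a"]
not_b = lambda v: not v["b"]
not_a_and_b = lambda v: not (v["a"] and v["b"])   # ~J for J = a ^ b

S = [lambda v: (not v["a"]) or (not v["b"])]      # S = Cn({~a v ~b})

# S classically entails ~(a ^ b) but neither disjunct ~a nor ~b alone.
classical_gap = (entails(S, not_a_and_b)
                 and not entails(S, not_a)
                 and not entails(S, not_b))
```

Here classical_gap is True, which is exactly the situation the constructive view rules out: a disjunction may only enter a constructive theory through one of its disjuncts.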

Theorem 6.3.90 For any formula φ of the language L such that φ ∈ ⋂ E_(D,W), φ
has a common derivation d¹_φ which generates φ in each extension S of (D, W).

Proof: We prove the theorem as before using the equivalent default theory
(D₁, ∅). Then, since (D, W) and (D₁, ∅) have the same extensions, the theorem holds
for (D, W). Suppose that φ is a formula of the language L such that φ ∈ ⋂ E_(D₁,∅).
Then there must be, for each extension S, a finite set of defaults {d^S_1, ..., d^S_n} ⊆ D₁
such that {c(d^S_1), ..., c(d^S_n)} ⊢ φ. Define d_(φ,S) by

d_(φ,S) = ⋀ᵢ p(d^S_i) : ⋀ᵢ J(d^S_i) / ⋀ᵢ c(d^S_i).

Claim 1: For any extension S of (D₁, ∅), d_(φ,S) ∈ GD(D₁ ∪ {d_(φ,S)}, S).

Proof of Claim 1: Let S be an extension of (D₁, ∅). Consider d_(φ,S) as
defined above. Clearly d_(φ,S) ∈ D₁ ∪ {d_(φ,S)}. Also, p(d_(φ,S)) = ⋀ᵢ p(d^S_i) ∈ S since
each p(d^S_i) is in S, since each d^S_i must be applied to deduce φ and therefore each d^S_i
is in GD(D₁, S). Now, for this extension S of (D₁, ∅), ¬J(d_(φ,S)) = ¬(⋀ᵢ J(d^S_i)) ∈ S
if and only if ⋁ᵢ(¬J(d^S_i)) ∈ S. Because of the constructive view we are taking, this
cannot happen unless ¬J(d^S_i) ∈ S for some i, a contradiction since ¬J(d^S_i) ∉ S
for each i. Thus, ¬J(d_(φ,S)) ∉ S. So, d_(φ,S) ∈ GD(D₁ ∪ {d_(φ,S)}, S). Moreover,
c(d_(φ,S)) = ⋀ᵢ c(d^S_i) deduces φ since {c(d^S_1), ..., c(d^S_n)} ⊢ φ. Thus, d_(φ,S) deduces φ
in S.

Now we have, as in the last section, for each extension S of (D₁, ∅) a default
d_(φ,S) which deduces φ in S. Define d¹_φ to be

⋁_S p(d_(φ,S)) : ⋀_S J(d_(φ,S)) / ⋁_S c(d_(φ,S)).

Call d¹_φ the COMMON DERIVATION of φ in (D₁, ∅).

Claim 2: d¹_φ ∈ GD(D₁ ∪ {d¹_φ}, S) for every extension S of (D₁, ∅).

Proof of Claim 2: Suppose that φ is a formula of the language such that φ ∈
⋂ E_(D₁,∅). Consider d¹_φ as defined above. Clearly d¹_φ ∈ D₁ ∪ {d¹_φ}. Also, for any
extension S₀ of (D₁, ∅), ¬J(d¹_φ) = ¬⋀_S J(d_(φ,S)) ∈ S₀ if and only if ⋁_S(¬J(d_(φ,S))) ∈
S₀. However, this is if and only if ¬J(d_(φ,S₀)) ∈ S₀, a contradiction. Thus, ¬J(d¹_φ) ∉
S for any extension S. Lastly, p(d¹_φ) ∈ S for any extension S of (D₁, ∅) since for
any S, p(d_(φ,S)) ∈ S and p(d¹_φ) = ⋁_S p(d_(φ,S)). Thus, d¹_φ ∈ GD(D₁ ∪ {d¹_φ}, S) for
every extension S of (D₁, ∅). Furthermore, c(d¹_φ) = ⋁_S c(d_(φ,S)) deduces φ in every
extension S since each c(d_(φ,S)) deduces φ in S. Thus, for any formula φ of the
language such that φ ∈ ⋂ E_(D₁,∅), φ has a common derivation d¹_φ which generates φ
in each extension S of (D₁, ∅). Note that d¹_φ may be in D₁, but then (D₁ ∪ {d¹_φ}, ∅)
will have the same family of extensions as (D₁, ∅).

Theorem 6.3.91 For any formula φ of the language such that φ ∈ ⋂ E_(D,W), define
d*_φ to be ⋁_S p(d_(φ,S)) : ⋀_S J(d_(φ,S)) / ⋁_S c(d_(φ,S)). Then,

d*_φ ∈ GD(D₁ ∪ {d*_φ}, ⋂ E_(D,W)).


Proof: Again, we prove the theorem using the equivalent default theory
(D₁, ∅). Since (D, W) and (D₁, ∅) have the same extensions, the theorem holds for
(D, W). Let φ be a formula of the language such that φ ∈ ⋂ E_(D₁,∅). Consider d*_φ
as defined above. Clearly d*_φ ∈ D₁ ∪ {d*_φ}. Now, for every extension S in E_(D₁,∅) we
have that d_(φ,S) generates for S so that p(d_(φ,S)) ∈ S. Thus, p(d*_φ) = ⋁_S p(d_(φ,S)) ∈
⋂ E_(D₁,∅). Lastly, we have that ¬J(d*_φ) = ¬⋀_S J(d_(φ,S)) ∈ ⋂ E_(D,W) if and only if
⋁_S(¬J(d_(φ,S))) ∈ ⋂ E_(D,W). However, this is if and only if ¬J(d_(φ,S₀)) ∈ ⋂ E_(D,W)
for some extension S₀. Then, ¬J(d_(φ,S₀)) ∈ S₀, a contradiction. So we have that
¬J(d*_φ) ∉ ⋂ E_(D,W). Thus d*_φ generates for ⋂ E_(D₁,∅). Moreover, c(d*_φ) = ⋁_S c(d_(φ,S))
generates φ in every extension so that c(d*_φ) generates φ in ⋂ E_(D,W).

As before, since there are only a finite number of d^S_i, there are only a finite
number of possible prerequisites, sets of justifications, and consequents of d_(φ,S). Thus
there are only a finite number of possibilities for d_(φ,S). Because of this, we are not
in danger of having an infinite disjunction or conjunction in d¹_φ or in d*_φ.

Example 6.3.30 Consider the first default theory of this section. Under this ap-
proach, we let

D = {: a/b, : c/b, : a/¬c, : c/¬a}.

Then the default theory (D, ∅) has the two extensions

S₁ = Cn({b, ¬c})

and

S₂ = Cn({b, ¬a}).

Also,

GD(D, S₁) = {: a/b, : a/¬c}

and

GD(D, S₂) = {: c/b, : c/¬a}.

Just as before, there is no default of D that derives b in both extensions. However, we
may consider the common derivation of b in (D, ∅) by the methods of this section,
which is

d¹_b = : a ∨ c/b ∨ b = : a ∨ c/b.

Then, d¹_b will generate b in each of these extensions. Compare this version of the
common derivation of b in (D, ∅) to that of the previous section, where we had

d¹_b = : {a ∨ c}/b ∨ b = : {a ∨ c}/b.

We see that the two methods produce very similar results, but that the common deriva-
tion produced by the constructive methods of this section is computationally simpler.

By the same argument as in Theorem 6.3.81, we still have that E_(D,W) ⊆
E_(D∪{d¹_φ},W) for any φ ∈ ⋂ E_(D,W). Equivalently, E_(D₁,∅) ⊆ E_(D₁∪{d¹_φ},∅) for any
φ ∈ ⋂ E_(D₁,∅). Also as before, the converse of this theorem is false.

We may again inductively define the default theory (Dₙ, ∅) as we did in the
last section. Investigations into the questions raised by this definition are further
warranted in this setting, as the common derivations are computationally simpler and
the constructive view may make the difference in what the answer to those questions
will be.

Example 6.3.31 Considering our basic default theory again, and the methods of this
section, we would find D₂ to be

D ∪ {d¹_b} = {: a/b, : c/b, : a/¬c, : c/¬a, : a ∨ c/b}.

Again, (D₂, ∅) has the same extensions as (D, ∅) so that I₂ = I₁ = {b} and d²_b = d¹_b.
Then Dₖ = D₂ for all k > 1.


Remark 6.3.92 As in the last section, we will have that E_(D₁,∅) ⊆ E_(D₂,∅) ⊆ ...
⊆ E_(Dₙ,∅) ⊆ E_(Dₙ₊₁,∅) ⊆ ....

When considering the finite constructive case, we may be sure that it is still
interesting in the case of default logic by considering the two-atom case as we did
before. Here, there is a bit more to say.

When considering a constructive view of the default theories, we must ask to
what extent this view weakens the results. That is, by using a constructive view,
what families of possible extensions are excluded, and does this exclusion cause the
subsequent theory to be uninteresting? In this section we examine the case where the
language has two atoms, p and q.

Any extension of a default theory must be a theory in the language. There
are fifteen possible extensions, which may be partially ordered by implication. They
are:

Cn(p ∧ q), Cn(p ∧ ¬q), Cn(¬p ∧ q), Cn(¬p ∧ ¬q), Cn(p), Cn(q), Cn(¬p),
Cn(¬q), Cn(p ↔ q), Cn(p ↔ ¬q), Cn(p ∨ q), Cn(p ∨ ¬q), Cn(¬p ∨ q), Cn(¬p ∨ ¬q),
and Cn(true).

Now, for any default theory we have that no one extension is contained in
any other extension. This greatly limits the possible sets of extensions that any one
default theory in this language can have. To see the possibilities more easily, we make
the following definitions:

A = {Cn(p ∨ q), Cn(p ∨ ¬q), Cn(¬p ∨ q), Cn(¬p ∨ ¬q)}

B = {Cn(p), Cn(q), Cn(¬p), Cn(¬q), Cn(p ↔ q), Cn(p ↔ ¬q)}

C = {Cn(p ∧ q), Cn(p ∧ ¬q), Cn(¬p ∧ q), Cn(¬p ∧ ¬q)}


Taking the constructive view, we cannot derive any element of A and we
cannot derive either of Cn(p ↔ q) and Cn(p ↔ ¬q). Without the constructive
view, there are 141 possible families of extensions coming from:

15 subsets of A, 63 subsets of B, 15 subsets of C, 12 families having one
element from A and one element from B, 6 families having two elements from A and
one element from B, 12 families having one element from B and one element from
C, 6 families having one element from B and two elements from C, and 12 families
having two elements from B and one element from C.

Taking the constructive view, we eliminate all but 46 of these possible families
of extensions. We allow only 15 of the subsets from B, all of the 15 subsets from C,
8 families having one element from B and one element from C, 4 families having one
element from B and two elements from C, and 4 families having two elements from
B and one element from C. We should note that any default theory whose family of
extensions is a singleton is a monotone theory.

The resulting mathematics remains interesting, as for any of the 46 possible sets
of extensions we can find a default theory with exactly those extensions. Specifically,
for any acceptable family F = {Cn(T₁), ..., Cn(Tₙ)}, the default theory (D, ∅) where

D = {: {¬T_j | j ≠ i}/T_i | 1 ≤ i ≤ n}

will have F as its set of extensions. For example, for the family {Cn(p), Cn(q)} we
would choose the default theory

({: ¬q/p, : ¬p/q}, ∅).

This default theory will have exactly the two theories Cn(p) and Cn(q) as its exten-
sions.
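This recipe can be verified exhaustively in the two-atom language, where a theory can be identified with its set of models. The sketch below (my own encoding, not the dissertation's) builds the default theory {: {¬T_j | j ≠ i}/T_i} for a target family and searches all candidate theories for fixpoints:

```python
from itertools import combinations, product

MODELS = list(product([False, True], repeat=2))     # assignments (p, q)
ALL = frozenset(MODELS)
Tp = frozenset(m for m in MODELS if m[0])           # models of Cn(p)
Tq = frozenset(m for m in MODELS if m[1])           # models of Cn(q)

def extensions(family):
    """Extensions of ({: {~T_j | j != i}/T_i}, 0), with theories as model sets.

    Default i generates for a candidate S iff S entails no other T_j (i.e. no
    justification ~T_j is contradicted); S is an extension iff S equals the
    closure of the generating conclusions (the intersection of their model
    sets, or the whole model space when nothing generates)."""
    exts = []
    for r in range(1, len(MODELS) + 1):
        for s in combinations(MODELS, r):
            S = frozenset(s)
            gen = [Ti for i, Ti in enumerate(family)
                   if all(not S <= Tj for j, Tj in enumerate(family) if j != i)]
            closure = ALL
            for Ti in gen:
                closure &= Ti
            if closure == S and S not in exts:
                exts.append(S)
    return exts

E = extensions([Tp, Tq])    # exactly the two theories Cn(p) and Cn(q)
```

For the family {Cn(p), Cn(q)} the search returns exactly those two theories, as the text claims.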

Now, since the intersections of families of extensions are of particular interest,
we might ask which acceptable families of extensions have intersections which yield a
specific theory. Knowing this, given a theory which we desire to see as the intersection
of extensions, we may create the default theory whose extensions intersect to yield
that theory.

In the two-atom case, intersections of families of extensions proceed as follows:

Cn(p ∧ q), Cn(p ∧ ¬q), Cn(¬p ∧ q), and Cn(¬p ∧ ¬q) are the intersections
of only themselves. The other ten theories (aside from Cn(true)) can be seen as
intersections of the following families:

Cn(p) = ⋂{Cn(p)} = ⋂{Cn(p ∧ q), Cn(p ∧ ¬q)},

Cn(¬p) = ⋂{Cn(¬p)} = ⋂{Cn(¬p ∧ q), Cn(¬p ∧ ¬q)},

Cn(q) = ⋂{Cn(q)} = ⋂{Cn(p ∧ q), Cn(¬p ∧ q)},

Cn(¬q) = ⋂{Cn(¬q)} = ⋂{Cn(p ∧ ¬q), Cn(¬p ∧ ¬q)},

Cn(p ↔ q) = ⋂{Cn(p ∧ q), Cn(¬p ∧ ¬q)},

Cn(p ↔ ¬q) = ⋂{Cn(p ∧ ¬q), Cn(¬p ∧ q)},

Cn(p ∨ q) = ⋂{Cn(p), Cn(q)} = ⋂{Cn(p ∧ q), Cn(p ∧ ¬q), Cn(¬p ∧ q)},

Cn(p ∨ ¬q) = ⋂{Cn(p), Cn(¬q)} = ⋂{Cn(p ∧ q), Cn(p ∧ ¬q), Cn(¬p ∧ ¬q)},

Cn(¬p ∨ q) = ⋂{Cn(¬p), Cn(q)} = ⋂{Cn(p ∧ q), Cn(¬p ∧ q), Cn(¬p ∧ ¬q)},

Cn(¬p ∨ ¬q) = ⋂{Cn(¬p), Cn(¬q)} = ⋂{Cn(p ∧ ¬q), Cn(¬p ∧ q), Cn(¬p ∧ ¬q)}.

So we see that any given theory may be derived as the intersection of a set of
acceptable extensions.
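Each identity in the list above can be checked mechanically: the intersection of a family of deductively closed theories is the theory of the union of their model sets, so every equation reduces to a union of sets of assignments. A small sketch (the names are mine):

```python
from itertools import product

MODELS = set(product([False, True], repeat=2))      # assignments (p, q)

def th(f):
    """The theory Cn(f), represented by its set of models."""
    return frozenset(m for m in MODELS if f(*m))

def intersect(*theories):
    """Intersection of deductively closed theories = theory of the union
    of their model sets."""
    return frozenset().union(*theories)

Cn_p      = th(lambda p, q: p)
Cn_q      = th(lambda p, q: q)
Cn_pq     = th(lambda p, q: p and q)
Cn_p_nq   = th(lambda p, q: p and not q)
Cn_np_q   = th(lambda p, q: (not p) and q)
Cn_np_nq  = th(lambda p, q: (not p) and (not q))
Cn_iff    = th(lambda p, q: p == q)
Cn_p_or_q = th(lambda p, q: p or q)

# A few of the identities from the list:
check1 = intersect(Cn_pq, Cn_p_nq) == Cn_p
check2 = intersect(Cn_pq, Cn_np_nq) == Cn_iff
check3 = (intersect(Cn_p, Cn_q) == Cn_p_or_q
          == intersect(Cn_pq, Cn_p_nq, Cn_np_q))
```

All three checks come out True, and the remaining lines of the list can be verified the same way.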


6.4 Logic Programming: Preliminary Definitions and Theorems

The stable model semantics of logic programs has been extensively studied.
Unfortunately, the set of stable models of a recursive propositional logic program with
negation, or even of a finite predicate logic program with negation, can be quite
complex. For example, in [MNR90], it is shown that for any recursive propositional
logic program P, there is an infinite branching recursive tree T_P such that there is
an effective 1:1 degree-preserving correspondence between the set of stable models
of P and the set of infinite paths through T_P. In [MNR92a] it is shown that given
any infinite branching recursive tree T, there exists a recursive propositional logic
program P_T such that there is an effective 1:1 correspondence between the set of
infinite paths through T and the set of stable models of P_T. Moreover, in [MNR92a],
it is shown that the same result holds if we replace recursive logic programs by finite
predicate logic programs with negation. These results imply that the set of stable
models of a recursive propositional logic program or a finite predicate logic program
can be extremely complex. For example, it follows from these results that there is a
finite predicate logic program which has a stable model but has no stable model which
is hyperarithmetic. The main motivation for this paper was to develop conditions
on recursive logic programs P which would guarantee the existence of a well-behaved
stable model for P, i.e., a stable model of P which is recursive or possibly even
polynomial time.

We shall now carefully define the notion of recursive logic programs. Then we
shall define the notion of proof schemes, which will lead to the definition of locally
finite programs. Finally, we shall describe the extension of Reiter's concept of
normal default theories to recursive logic programs, following Marek, Nerode, and
Remmel.

A program clause is an expression of the form

C = p ← q₁, ..., qₙ, ¬r₁, ..., ¬rₘ (6.1)

where p, q₁, ..., qₙ, r₁, ..., rₘ are atomic formulas in some propositional language L.
A program is a set of clauses of the form (6.1). A clause C is called a Horn clause if
m = 0. We let Horn(P) denote the set of all Horn clauses of P. H_P is the Herbrand
base of P, that is, the set of all atomic formulas of the language of P.

If P is a program and M ⊆ H_P is a subset of the Herbrand base, define the
operator T_{P,M} : P(H_P) → P(H_P), where T_{P,M}(I) is the set of all p such that there
exists a clause C = p ← q₁, ..., qₙ, ¬r₁, ..., ¬rₘ in P such that q₁ ∈ I, ..., qₙ ∈ I
and {r₁, ..., rₘ} ∩ M = ∅.

The operator T_{P,M} is a monotonic finitizable operator, see Apt (1990), and
hence possesses a least fixpoint F_{P,M}. Given a program P and M ⊆ H_P, the Gelfond-
Lifschitz reduct of P is defined as follows. For every clause C of P, execute the
following operation: if some atom a belongs to M and its negation ¬a appears in C,
then eliminate C altogether. In the remaining clauses that have not been eliminated
by the operation above, eliminate all the negated atoms. The resulting program P^GL_M
is a Horn propositional program (possibly infinite). The program P^GL_M possesses a
least Herbrand model. If that least model of P^GL_M coincides with M, then M is called
a stable model for P. Gelfond and Lifschitz (1988) proved that every stable model of
P is a minimal model of P and that M is a stable model of P iff M = F_{P,M}.

Having characterized stable models as fixpoints of (parametrized) operators,
consider the form of elements of F_{P,M}. A P,M-derivation of an atom p is a sequence
(p₁, ..., p_s) such that (i) p_s = p and (ii) for every i ≤ s, either "p_i ←" is a member
of P or there is a clause C = "p_i ← q₁, ..., qₙ, ¬r₁, ..., ¬rₘ" such that C ∈ P,
q₁, ..., qₙ ∈ {p₁, ..., p_{i-1}}, and r₁, ..., rₘ ∉ M. It is easy to show that F_{P,M} is
the set of all atoms possessing a P,M-derivation. Thus M is a stable model of the
program P if and only if M consists exactly of those atoms which possess a P,M-
derivation.
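Both characterizations, the least model of the reduct P^GL_M and the least fixpoint F_{P,M}, are easy to compute for a finite propositional program. A sketch (the example program is mine, not from the text):

```python
# A clause p <- q1,...,qn, not r1,...,not rm is a triple (head, pos, neg).
def gl_reduct(P, M):
    """Gelfond-Lifschitz reduct: drop clauses whose negative body meets M,
    then drop all negated atoms from the survivors (yielding a Horn program)."""
    return [(h, pos, ()) for (h, pos, neg) in P if not set(neg) & M]

def least_model(horn):
    """Least Herbrand model of a Horn program, by fixpoint iteration."""
    I, changed = set(), True
    while changed:
        changed = False
        for h, pos, _ in horn:
            if h not in I and all(q in I for q in pos):
                I.add(h)
                changed = True
    return I

def f_pm(P, M):
    """Least fixpoint F_{P,M} of the operator T_{P,M}."""
    I, changed = set(), True
    while changed:
        changed = False
        for h, pos, neg in P:
            if h not in I and all(q in I for q in pos) and not set(neg) & M:
                I.add(h)
                changed = True
    return I

def is_stable(P, M):
    return least_model(gl_reduct(P, M)) == M

P = [("p", (), ("q",)),     # p <- not q
     ("q", (), ("p",)),     # q <- not p
     ("r", ("p",), ())]     # r <- p

stable = [M for M in ({"p", "r"}, {"q"}, set(), {"p", "q", "r"})
          if is_stable(P, M)]
```

On this toy program the stable models found are {p, r} and {q}, and F_{P,M} agrees with the least model of the reduct for every candidate M, matching the fixpoint characterization above.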


The property that a sequence (p₁, ..., p_s) is a P,M-derivation of an atom p
does not depend on the whole set M but only on the intersection of M and a certain
finite set of atoms that occur in the derivation. In order that the sequence (p₁, ..., p_s)
be a P,M-derivation of an atom p_s, some atoms must be left out of the set M. Each
derivation depends on a finite number of such omitted atoms. In other words, if we
classify the atoms according to whether they are "in" or "out" of M, the property
that a sequence (p₁, ..., p_s) is a P,M-derivation depends only on whether a finite
number of elements are out of M. The notion of a proof scheme formalizes this idea.

A (P-)proof scheme for an atom p is a sequence S = ⟨(p_i, C_i, U_i)⟩_{i=1}^{s} of triples
such that for each triple (p_i, C_i, U_i), p_i ∈ H_P, C_i ∈ P is a clause with head p_i, and
U_i is a finite subset of H_P. Such a sequence S is a proof scheme for p if (1) p_s = p, and
for every i, (2) C_i = p_i ← q₁, ..., qₙ, ¬r₁, ..., ¬rₘ, where {q₁, ..., qₙ} ⊆ {p₁, ..., p_{i-1}}
and U_i = U_{i-1} ∪ {r₁, ..., rₘ}.

We call p the conclusion of S, written p = cln(S), and the set U_s the support of
S, written supp(S). We say that a subset M ⊆ H_P admits a proof scheme S =
⟨(p_i, C_i, U_i)⟩_{i=1}^{s} if M ∩ U_s = ∅.

The following proposition, due to Marek, Nerode, and Remmel in [MNR90],
characterizes stable models in terms of the existence of proof schemes.

Proposition 6.4.93 Let M ⊆ H_P. Then M is a stable model of P if and only if

(1) for every p ∈ M, there is a proof scheme S for p such that M admits S, and

(2) for every p ∉ M, there is no proof scheme S for p such that M admits S.
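Proposition 6.4.93 can be tested directly on a finite program by enumerating proof schemes with a bounded depth-first search, keeping only the supports. A sketch, with a small example program of my own (not from the text):

```python
def proof_schemes(P, p, max_len=4):
    """Supports of P-proof schemes for atom p, up to length max_len.

    A scheme is grown as (derived_atoms, support): each step uses a clause
    whose positive body is contained in the atoms derived so far, and adds
    the clause's negative body to the accumulated support."""
    results = []

    def extend(derived, support):
        if derived and derived[-1] == p:
            results.append(frozenset(support))
        if len(derived) >= max_len:
            return
        for h, pos, neg in P:
            if all(q in derived for q in pos):
                extend(derived + [h], support | set(neg))

    extend([], set())
    return results

def admits(M, support):
    """M admits a scheme iff M is disjoint from its support."""
    return not (M & support)

P = [("p", (), ("q",)),     # p <- not q
     ("q", (), ("p",)),     # q <- not p
     ("r", ("p",), ())]     # r <- p

M = {"p", "r"}              # a stable model of P

# (1): every atom of M has an admitted proof scheme;
# (2): the atom q outside M has none.
ok1 = all(any(admits(M, s) for s in proof_schemes(P, a)) for a in M)
ok2 = not any(admits(M, s) for s in proof_schemes(P, "q"))
```

Here ok1 and ok2 both come out True: p and r have schemes with support {q}, disjoint from M, while every scheme for q carries p in its support and so is not admitted. The length bound max_len is an assumption that suffices for this tiny program.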

As stated in the introduction, restrictions on the number of proof schemes
greatly reduce the possible complexity of the set of stable models of a recursive logic
program P. But how many derivation schemes for an atom p can there be? If we
allow P to be infinite, then it is easy to construct an example with infinitely many
derivations of a single atom. Moreover, given two proof schemes, one can insert one
into the other (increasing appropriately the sets U_i in this process, with obvious
restrictions). Thus various clauses C_i may be immaterial to the purpose of deriving
p. This leads us to introduce a natural relation ≺ on proof schemes using a well-
known device from proof theory. Namely, we define S₁ ≺ S₂ if S₁, S₂ have the same
conclusion and if every clause appearing in S₁ also appears in S₂. Then a minimal
proof scheme for p is defined to be a proof scheme S for p such that whenever S′ is a
proof scheme for p and S′ ≺ S, then S ≺ S′. Note that ≺ is reflexive and transitive,
but ≺ is not antisymmetric. However, it is wellfounded. That is, given any proof
scheme S, there is an S′ such that S′ ≺ S and for every S″, if S″ ≺ S′, then S′ ≺ S″.
Moreover, the associated equivalence relation, S ≡ S′, defined by S ≺ S′ and S′ ≺ S,
has finite equivalence classes.

Example 6.4.32 Let P₁ be the following program:

C₁ : p(0) ← ¬q(Y).

C₂ : nat(0) ← .

C₃ : nat(s(X)) ← nat(X).

Then the atom p(0) possesses infinitely many minimal proof schemes. For instance,
each one-element sequence S_i = ⟨(p(0), C₁θ_i, {q(s^i(0))})⟩, where θ_i is the operation of
substituting s^i(0) for Y, is a minimal proof scheme for p(0). However, if the program P₂
is the result of replacing clause C₁ by

C₁′ : q(s(Y)) ← ¬q(Y),

each atom possesses only finitely many minimal proof schemes.

We shall call a program P locally finite if for every atom p there are only
finitely many minimal proof schemes with conclusion p. If P is locally finite and
p ∈ H_P, we let D_p denote the union of all supports of minimal proof schemes of p.
Clearly, for any M ⊆ H_P, the question of whether p has a P,M-derivation depends
only on M ∩ D_p. This implies that if P is locally finite, when we attempt to construct
a subset M ⊆ H_P which is a stable model for P, we can apply a straightforward
(although still infinite) tree construction to produce such an M, if such an M exists
at all.

Next, we need to make the notion of a recursive program precise. First, assume
that we have a Gödel numbering of the elements of the Herbrand base H_P. Thus,
we can think of each element of the Herbrand base as a natural number. If p ∈ H_P,
write c(p) for the code or Gödel number of p. Let ω = {0, 1, 2, ...}. Assume [·,·]
is a fixed recursive pairing function which maps ω × ω onto ω and has recursive
projection functions π_1 and π_2, defined by π_i([x_1, x_2]) = x_i for all x_1, x_2 and i ∈
{1, 2}. Code a finite sequence (x_1, ..., x_n) for n ≥ 3 by the usual inductive definition
[x_1, ..., x_n] = [x_1, [x_2, ..., x_n]]. Next, code finite subsets of ω via "canonical indices".
The canonical index of the empty set ∅ is the number 0, and the canonical index of a
nonempty set {x_0, ..., x_n}, where x_0 < ... < x_n, is Σ_{i=0}^{n} 2^{x_i}. Let F_k denote the finite
set whose canonical index is k. Once finite sets and sequences of natural numbers
have been coded, we can code more complex objects such as clauses, proof schemes,
etc. as follows. Let the code c(C) of a clause C = p ← q_1, ..., q_n, ¬r_1, ..., ¬r_m be
[c(p), k, l], where k is the canonical index of the finite set {c(q_1), ..., c(q_n)} and l is
the canonical index of the finite set {c(r_1), ..., c(r_m)}. Similarly, let the code c(S) of a
proof scheme S = ⟨(p_i, C_i, U_i)⟩_{i=1}^{s} be [s, [[c(p_1), c(C_1), c(U_1)], ..., [c(p_s), c(C_s), c(U_s)]]],
where for each i, c(U_i) is the canonical index of the finite set of codes of the elements of
U_i. The first coordinate of the code of a proof scheme is the length of the proof scheme.
Once we have defined the codes of proof schemes, then for locally finite programs we
can define the code of the set D_p consisting of the union of the supports of all minimal
proof schemes for p. Finally, we code recursive sets as natural numbers. Let φ_0, φ_1, ...
be an effective list of all partial recursive functions, where φ_e is the partial recursive
function computed by the e-th Turing machine. By definition, a (recursive) index
of a recursive set R is an e such that φ_e is the characteristic function of R. Call
a program P recursive if the set of codes of the elements of the Herbrand universe H_P
is recursive and the set of codes of the clauses of the program P is recursive. If P is a
recursive program, then by an index of P we mean the code of a pair [u, p] where u is
an index of the recursive set of all codes of elements in H_P and p is an index of the
recursive set of the codes of all clauses in P.

For the rest of this discussion we shall identify an object with its code as
described above. This means that we shall think of the Herbrand universe of a
program, and the program itself, as subsets of ω, and of clauses and proof schemes
as elements of ω.
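The coding apparatus above is routine to implement. The sketch below uses the classical Cantor pairing as the recursive pairing function [·,·] (an assumption — the text only requires some recursive pairing with recursive projections) together with canonical indices of finite sets.

```python
# Sketch of the coding machinery of this section.

def pair(x, y):
    # Cantor pairing: a recursive bijection from ω×ω onto ω.
    return (x + y) * (x + y + 1) // 2 + y

def code_seq(xs):
    # [x1, ..., xn] = [x1, [x2, ..., xn]], as in the text.
    if len(xs) == 1:
        return xs[0]
    return pair(xs[0], code_seq(xs[1:]))

def canonical_index(fin_set):
    # Canonical index of {x0 < ... < xn} is the sum of 2^xi; 0 codes ∅.
    return sum(2 ** x for x in fin_set)

def F(k):
    # F_k: the finite set whose canonical index is k (read off binary digits).
    return {i for i in range(k.bit_length()) if (k >> i) & 1}

assert canonical_index(set()) == 0
assert F(canonical_index({0, 2, 5})) == {0, 2, 5}
print(code_seq([3, 1, 4]), canonical_index({1, 3}))
```

The round trip F(canonical_index(A)) = A is what makes canonical indices a faithful coding of finite sets.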

We also need to define various types of recursive trees and Π⁰₁ classes. Let ω^{<ω}
be the set of all finite sequences from ω and let 2^{<ω} be the set of all finite sequences
of 0's and 1's. Given α = (α_1, ..., α_n) and β = (β_1, ..., β_k) in ω^{<ω}, write α ⊑ β if
α is an initial segment of β, i.e., if n ≤ k and α_i = β_i for i ≤ n. In this paper, we
identify each finite sequence α = (α_1, ..., α_n) with its code c(α) = [n, [α_1, ..., α_n]] in
ω. Let 0 be the code of the empty sequence ∅. When we say that a set S ⊆ ω^{<ω} is
recursive, recursively enumerable, etc., what we mean is that the set {c(α) : α ∈ S}
is recursive, recursively enumerable, etc. Define a tree T to be a nonempty subset
of ω^{<ω} such that T is closed under initial segments. Call a function f: ω → ω an
infinite path through T provided that for all n, (f(0), ..., f(n)) ∈ T. Let [T] be the
set of all infinite paths through T. Call a set A of functions a Π⁰₁-class if there exists
a recursive predicate R such that A = {f: ω → ω : ∀n (R(n, [f(0), ..., f(n)]))}. Call a
Π⁰₁-class A recursively bounded if there exists a recursive function g: ω → ω such
that ∀f ∈ A ∀n (f(n) ≤ g(n)). It is not difficult to see that if A is a Π⁰₁-class, then
A = [T] for some recursive tree T ⊆ ω^{<ω}. A tree T ⊆ ω^{<ω} is highly recursive if
T is a recursive finitely branching tree and also there is a recursive procedure which,
applied to α = (α_1, ..., α_n) in T, produces a canonical index of the set of immediate
successors of α in T. Then if C is a recursively bounded Π⁰₁-class, it is easy to show
that C = [T] for some highly recursive tree T ⊆ ω^{<ω}; see [CR98]. For any set A ⊆ ω,
the set A' = {e : φ_e^A(e) is defined} is called the jump of A; let 0' denote the jump
of the empty set ∅. We write A ≤_T B if A is Turing reducible to B and A ≡_T B if
A ≤_T B and B ≤_T A. A function f: C → D is said to be degree-preserving if
A ≡_T f(A) for all A ∈ C.
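These tree notions are easy to make concrete on a toy example. The code below takes a tree to be a predicate on finite sequences (tuples) closed under initial segments; the particular tree and the branching bound are illustrative assumptions, and the successor function is exactly the extra data a highly recursive tree provides.

```python
# A finite sketch of recursive trees in ω^{<ω}.

def in_tree(sigma):
    # Example tree: binary sequences with no two consecutive 1's.
    return all(x in (0, 1) for x in sigma) and \
           all(not (a == 1 and b == 1) for a, b in zip(sigma, sigma[1:]))

def successors(sigma, bound=2):
    # Immediate successors of sigma in T; for a highly recursive tree a
    # recursive bound on the branching (here `bound`) is available.
    return [sigma + (i,) for i in range(bound) if in_tree(sigma + (i,))]

# Closure under initial segments: every prefix of a node is a node.
node = (0, 1, 0, 0, 1)
assert in_tree(node)
assert all(in_tree(node[:k]) for k in range(len(node) + 1))
print(successors((0, 1)))
```

A 1 cannot be followed by another 1, so the node (0, 1) has a single immediate successor, (0, 1, 0).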

Even if P is a locally finite program, there is no guarantee that the global
behavior of the function p ↦ D_p, mapping ω into ω, has any sort of effective properties.
Thus we are led to define the following.

We say that a locally finite recursive program P possesses a recursive proof
structure (rps) if (1) P is locally finite, and (2) the function p ↦ D_p is recursive. A
locally finite recursive program with an rps is called an rps program.

6.5 Equivalence of Logic Programming to Nonmonotonic Rule Systems

The equivalence of Logic Programming to Nonmonotonic Rule Systems has
been shown by Cenzer, Marek, Nerode, Remmel, and others. Here, we give the
equivalence as shown by Cenzer and Remmel [CR99]. We shall now encode the stable
models of logic programs as extensions of suitably chosen nonmonotonic rule systems.
The universe of all our systems, U, will be the Herbrand base of the program. Next,
to every general propositional clause C,

C = p ← q_1, ..., q_m, ¬r_1, ..., ¬r_n

assign the rule

r_C = (q_1, ..., q_m : r_1, ..., r_n) / p.

Now, given the program P, define

N_P = {r_C : C ∈ ground(P)}.

We then have the following result.


Theorem 6.5.94 Let P be a general logic program. Let M be a subset of the Herbrand
base of P. Then M is a stable model of P if and only if M is an extension of
the nonmonotonic rule system (U, N_P).

Theorem 6.5.94 allows us to obtain results concerning stable models of logic programs
from theorems about extensions of nonmonotonic rule systems.
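The translation C ↦ r_C is a purely syntactic re-packaging, as a small sketch makes clear; the tuple representations below are illustrative assumptions.

```python
# Sketch of the clause-to-rule translation.  A ground clause is
# (head, positives, negatives); the corresponding nonmonotonic rule is
# (premises, restraints, conclusion), i.e. q1..qm : r1..rn / p.

def rule_of_clause(clause):
    p, qs, rs = clause
    return (tuple(qs), tuple(rs), p)

def N(program):
    # N_P = {r_C : C in ground(P)}; here `program` is assumed already ground.
    return {rule_of_clause(c) for c in program}

# p <- q, ¬r   becomes the rule   q : r / p
program = [("p", ["q"], ["r"]), ("q", [], [])]
print(N(program))
```

Under this identification, stable models of the program are exactly the extensions of the rule system, as Theorem 6.5.94 states.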

These theorems and other similar results obtained in [MNR90] and [MNR92a]
indicate why nonmonotonic rule systems are useful in the investigation of various
nonmonotonic logical formalisms. The results, both positive and, to some extent,
negative, on nonmonotonic rule systems provide us with corresponding results for all
these formalisms.

6.6 Previous Results Through the Eyes of Logic Programming

In this section, we shall introduce the key notion of a locally determined logic
program P. The informal notion of a locally determined logic program P is one in
which the existence of a proof scheme for an atom a (or the lack of existence thereof)
can be determined by examining only clauses or proof schemes involving some initial
segment of the Herbrand base of P. More formally, fix some countable logic program
P and some listing a_0, a_1, ... of the atoms of the Herbrand base of P without repetitions.
(We shall make the convention that if P is a recursive logic program, then there is some
recursive function h such that h(i) = a_i.) Then given a proof scheme or a clause φ,
we write max(φ) for max({i : a_i occurs in φ}). We shall write P_n for the set of
all clauses C ∈ P such that max(C) ≤ n and let A_n = {a_0, ..., a_n}.

Definition 6.6.95 We shall say that n is a level of P if for all S ⊆ {a_0, ..., a_n}
and all i ≤ n, whenever there exists a proof scheme ψ such that cln(ψ) = a_i and
supp(ψ) ∩ S = ∅, then there exists a proof scheme φ such that cln(φ) = a_i, supp(φ) ∩
S = ∅, and max(φ) ≤ n. Note that by definition, the Herbrand base H_{P_n} of P_n is
contained in A_n.


Theorem 6.6.96 Suppose that n is a level of P and E is a stable model of P. Then
E_n = E ∩ {a_0, ..., a_n} is a stable model of P_n.

Proof: If E is a stable model of P, then for any a_i ∈ E_n, there is a proof scheme ψ
such that cln(ψ) = a_i and supp(ψ) ∩ E = ∅. Thus, in particular, supp(ψ) ∩ E_n = ∅,
so that since n is a level, there exists a proof scheme φ such that max(φ) ≤ n,
cln(φ) = a_i, and supp(φ) ∩ E_n = ∅. Thus φ is a proof scheme of P_n and E_n admits φ.
Vice versa, if i ≤ n and a_i is not in E_n, then there can be no proof scheme φ of P_n
such that cln(φ) = a_i, max(φ) ≤ n, and supp(φ) ∩ E_n = ∅, since this would violate
the fact that E is a stable model of P. Thus E_n is a stable model of P_n.

Definition 6.6.97 1. We shall say that a logic program P is locally determined
if P is countable and there are infinitely many n such that n is a level of P.

2. Suppose that P is a recursive logic program. Then we say that P is effectively
locally determined if P is locally determined and there is a recursive function
f such that for all i, f(i) > i and f(i) is a level of P.

Notation: If P is locally determined, we let lev(P) = {n : n is a level of P}.

In [MNR90] and [MNR92a], Marek, Nerode, and Remmel showed that the
problem of finding a stable model of a locally finite recursive logic program can be
reduced to finding an infinite path through a finitely branching recursive tree and
that the problem of finding a stable model of an rps logic program can be reduced to
finding an infinite path through a highly recursive tree. A locally determined logic
program is not always locally finite, since it is possible that a given atom has infinitely
many proof schemes which involve arbitrarily large atoms. Vice versa, it is possible
to give examples of locally finite logic programs which are not locally determined.
Nevertheless, we shall see that we get results similar to those of Marek, Nerode,
and Remmel for locally determined and effectively locally determined recursive logic
programs.

Theorem 6.6.98 Let P be a recursive logic program.

1. If P is locally determined, then there is a recursive finitely branching tree T
and a 1:1 degree-preserving correspondence between the set of stable models
of P and [T], and

2. If P is effectively locally determined, then there is a highly recursive finitely
branching tree T and a 1:1 degree-preserving correspondence between the set
of stable models of P and [T].

Proof:

There is no loss in generality in assuming that H_P = ω and that a_0 = 0, a_1 =
1, .... Next observe that for each n, P_n has only finitely many minimal proof schemes,
so that we can effectively list all minimal proof schemes φ_0, φ_1, ... in such a way
that

1. if max(φ_k) = i and max(φ_l) = j and i < j, then k < l. (This says that if i < j,
then the proof schemes whose max is i come before those proof schemes whose
max is j.)

2. if max(φ_k) = max(φ_l) = i, then k < l if and only if c(φ_k) < c(φ_l), where c(φ) denotes
the index assigned to a proof scheme φ under our effective Gödel numbering of
the proof schemes.

We shall encode a stable model M of P by a path π_M = (π_0, π_1, ...) through
the complete ω-branching tree ω^{<ω} as follows. First, for all i ≥ 0, π_{2i} = χ_M(i). That
is, at stage 2i we encode the information whether i belongs to M. Next, if π_{2i} = 0, then
π_{2i+1} = 0. But if π_{2i} = 1, so that i ∈ M, then π_{2i+1} = q_M(i), where q_M(i) is the least q
such that cln(φ_q) = i and supp(φ_q) ∩ M = ∅. Thus φ_{q_M(i)} is the least proof scheme which
witnesses that i has a P, M-derivation.


Clearly M ≤_T π_M since it is enough to look at the values of π_M at the even
levels to read off M. Now given an M-oracle, it should be clear that for each i ∈ M,
we can use the M-oracle to find q_M(i) effectively. This means that π_M ≤_T M. Thus
the correspondence M ↦ π_M is an effective degree-preserving correspondence. It is
trivially one-to-one.
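On a finite fragment, the encoding M ↦ π_M can be sketched as follows; the tiny list of proof schemes is a toy stand-in (an assumption) for the effective enumeration φ_0, φ_1, ... .

```python
# Each scheme is (conclusion atom, support set), listed as φ_0, φ_1, ...
schemes = [(0, set()),     # φ_0 proves atom 0 with empty support
           (1, {2}),       # φ_1 proves atom 1 provided 2 is out
           (2, {1})]       # φ_2 proves atom 2 provided 1 is out

def q_M(i, M):
    # Least q with cln(φ_q) = i and supp(φ_q) ∩ M = ∅.
    for q, (cln, supp) in enumerate(schemes):
        if cln == i and not (supp & M):
            return q
    return None

def pi(M, n):
    # First 2n coordinates of π_M: even entries give the characteristic
    # function of M, odd entries the least applicable scheme index.
    out = []
    for i in range(n):
        out.append(1 if i in M else 0)
        out.append(q_M(i, M) if i in M else 0)
    return out

M = {0, 1}           # a candidate stable model containing atoms 0 and 1
print(pi(M, 3))      # → [1, 0, 1, 1, 0, 0]
```

Reading the even coordinates recovers M; reading the odd coordinates recovers, for each i ∈ M, a justifying proof scheme.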

Next we construct a recursive tree T ⊆ ω^{<ω} such that

[T] = {π_E : E is a stable model of P}.

Let L_k = max({i : max(φ_i) ≤ k}). It is easy to see that since P is a locally
determined recursive logic program, we can effectively calculate L_k from k. We have
to say which finite sequences belong to our tree T. To this end, given a sequence
σ = (σ(0), ..., σ(k)) ∈ ω^{<ω}, set I_σ = {i : 2i ≤ k ∧ σ(2i) = 1} and O_σ = {i : 2i ≤
k ∧ σ(2i) = 0}. Now we define T by putting σ into T if and only if the following four
conditions are met:

(a) ∀i (2i + 1 ≤ k ∧ σ(2i) = 0 ⇒ σ(2i + 1) = 0).

(b) ∀i (2i + 1 ≤ k ∧ σ(2i) = 1 ⇒ σ(2i + 1) = q, where φ_q is a minimal proof scheme
such that cln(φ_q) = i and supp(φ_q) ∩ I_σ = ∅).

(c) ∀i (2i + 1 ≤ k ∧ σ(2i) = 1 ⇒ there is no c ≤ L_{⌊k/2⌋} such that cln(φ_c) = i,
supp(φ_c) ⊆ O_σ, and c < σ(2i + 1)).

(Here ⌊·⌋ is the usual number-theoretic "floor" function.)

(d) ∀i (2i ≤ k ∧ σ(2i) = 0 ⇒ there is no c ≤ L_{⌊k/2⌋} such that cln(φ_c) = i and
supp(φ_c) ⊆ O_σ).

It is immediate that if σ ∈ T and τ ⊑ σ, then τ ∈ T. Moreover, it is clear from
the definition that T is a recursive subset of ω^{<ω}. Thus T is a recursive tree. Also,
it is easy to see that our definitions ensure that, for any stable model E of P, the sequence
π_E is a branch through T, that is, π_E ∈ [T].
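A toy membership test for T along the lines of conditions (a)-(d) is sketched below; the scheme list, the reading of max(φ_q), and the finite cutoff L_k are all illustrative assumptions rather than the construction in full generality.

```python
# (cln, supp) pairs standing in for the enumeration φ_0, φ_1, ...
schemes = [(0, set()), (1, {2}), (2, {1})]

def max_phi(q):
    cln, supp = schemes[q]
    return max({cln} | supp)

def L(k):
    cand = [q for q in range(len(schemes)) if max_phi(q) <= k]
    return max(cand) if cand else -1

def in_T(sigma):
    k = len(sigma) - 1
    I = {i for i in range(len(sigma)) if 2 * i <= k and sigma[2 * i] == 1}
    O = {i for i in range(len(sigma)) if 2 * i <= k and sigma[2 * i] == 0}
    for i in range(len(sigma) // 2 + 1):
        if 2 * i + 1 <= k:
            q = sigma[2 * i + 1]
            if sigma[2 * i] == 0 and q != 0:
                return False                        # condition (a)
            if sigma[2 * i] == 1:
                if q >= len(schemes):
                    return False                    # toy: q must index a scheme
                cln, supp = schemes[q]
                if cln != i or (supp & I):
                    return False                    # condition (b)
                if any(schemes[c][0] == i and schemes[c][1] <= O
                       for c in range(min(q, L(k // 2) + 1))):
                    return False                    # condition (c)
        if 2 * i <= k and sigma[2 * i] == 0:
            if any(schemes[c][0] == i and schemes[c][1] <= O
                   for c in range(L(k // 2) + 1)):
                return False                        # condition (d)
    return True

print(in_T((1, 0)))   # atom 0 in, justified by φ_0
print(in_T((0, 0)))   # pruned: φ_0 applies unconditionally, so 0 cannot be out
```

The second query illustrates the pruning behavior of condition (d): a node declaring atom 0 out is cut off because φ_0 derives 0 with empty support.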

We shall now show that every infinite branch through T is of the form π_E for
a suitably chosen stable model E. To this end, assume that β = (β(0), β(1), ...) is
an infinite branch through T. There is only one candidate for E, namely E_β = {i :
β(2i) = 1}. Two items have to be checked, namely, (I) E_β is a stable model of P
and (II) π_{E_β} = β.

To prove (I), first observe that if i ∈ E_β, then β(2i) = 1 and β(2i + 1) = q,
where φ_q is a proof scheme such that cln(φ_q) = i. Moreover, condition (b) and the fact
that β↾n = (β(0), β(1), ..., β(n)) ∈ T for all n ≥ 2i + 1 easily imply that supp(φ_q) ∩ I_{β↾n} = ∅
for all such n and hence supp(φ_q) ∩ E_β = ∅. In addition, condition (c) ensures that
φ_q is the least proof scheme with this property. Similarly, if i ∉ E_β, then condition
(d) and the fact that β↾n = (β(0), ..., β(n)) ∈ T for all n ≥ 2i + 1 easily imply that
there can be no proof scheme φ_q with cln(φ_q) = i and supp(φ_q) ∩ E_β = ∅. It then
follows from Proposition 6.4.93 that E_β is a stable model of P and that π_{E_β} = β.

The key fact needed to establish the branching properties of T is that for
any sequence σ ∈ T and any i, either σ(2i) = σ(2i + 1) = 0, or σ(2i) = 1 and
σ(2i + 1) codes a minimal proof scheme for i. We just note that when a proof scheme
ψ = φ_{σ(2i+1)} does not correspond to a path π_E, then there will be some k such that
σ has no extension in T of length k. This will happen once we either find a smaller
code for a proof scheme or find some u > i in the support of ψ such that all
possible extensions τ of σ have τ(2u) = 1.

We claim that T is always finitely branching and that if P is effectively locally
determined, then T is highly recursive. Clearly the only case of interest is when
2i + 1 ≤ k and σ(2i) = 1. In this case we let σ(2i + 1) = c, where cln(φ_c) = i and
supp(φ_c) ∩ I_σ = ∅ and there is no a < c such that cln(φ_a) = i and supp(φ_a) ∩ I_σ = ∅.
Now suppose that p is a level and i ≤ p. Then by definition, there must be a minimal
proof scheme ψ such that max(ψ) ≤ p, cln(ψ) = i, and supp(ψ) ∩ I_σ = ∅. Thus
ψ = φ_q for some q ≤ L_p. It follows that c ≤ L_p where p is the least level greater
than or equal to i. Thus T is always finitely branching. Now if P is effectively locally
determined and this is witnessed by the recursive function f, then it will always be
the case that c ≤ L_{f(i)}, so that T will be highly recursive.

Corollary 6.6.99 Suppose that P is a countable locally determined logic program
such that there are infinitely many n such that P n has a stable model E n . Then P
has a stable model.

Proof: Consider the tree T constructed for P as in Theorem 6.6.98. Here we
can again construct our sequence of minimal proof schemes φ_0, φ_1, ... recursively in P,
just as we did in Theorem 6.6.98. However, we can only conclude that T is recursive
in P. Nevertheless, we are guaranteed that T is finitely branching, which is all we
need for our argument.

Now fix some level n and consider some m > n such that P_m has a stable model
E_m. Then by the exact same argument as in Theorem 6.6.96, E_n = E_m ∩ {0, ..., n}
will be a stable model of P_n. Now consider the node σ_{E_n} = (σ(0), ..., σ(2n + 1)) such
that

1. σ(2i) = 1 if i ∈ E_n and σ(2i) = 0 if i ∉ E_n,

2. σ(2i + 1) = 0 if σ(2i) = 0, and

3. σ(2i + 1) = c, where c is the least number such that max(φ_c) ≤ n, cln(φ_c) = i, and
supp(φ_c) ∩ E_n = ∅. (Note that it follows from our ordering of minimal proof schemes
that φ_c is the least proof scheme φ such that cln(φ) = i and supp(φ) ∩ E_n = ∅.)

It is easy to check that our construction of T ensures that σ_{E_n} ∈ T. It follows that T
is an infinite finitely branching tree and hence T has an infinite path π by König's Lemma.
Our proof of Theorem 6.6.98 shows that E_π is a stable model of P.

We now give two conditions which ensure the existence of recursive stable
models.


Definition 6.6.100 1. Let P be a locally determined logic program and lev(P) =
{l_0 < l_1 < ...}. Then we say that P has the level extension property if
for all k, whenever E_k is a stable model of P_{l_k}, there exists a stable model E_{k+1}
of P_{l_{k+1}} such that E_{k+1} ∩ {a_0, ..., a_{l_k}} = E_k.

2. A level n of P is a strong level of P if for any level m < n of P and any stable
model E_m of P_m, if there is no stable model E of P with E ∩ {a_0, ..., a_m} = E_m,
then there is no stable model E_n of P_n with E_n ∩ {a_0, ..., a_m} = E_m.

3. P has effectively strong levels if P has infinitely many strong levels and
there is a computable function f such that for each i, i < f(i) and f(i) is a
strong level.

In general, it is not easy to ensure that a locally determined logic program P
has the level extension property. However, there are many natural examples where
this condition is satisfied. One way to generate such programs is to consider the
following result from recursive combinatorics. Bean [Be76] showed that there exist
highly recursive connected graphs which are 3-colorable but not recursively
k-colorable for any k. However, Bean also showed that every infinite connected k-colorable
highly recursive graph G is recursively 2k-colorable. Here a graph G =
(V, E) is highly recursive if the vertex set V is a recursive subset of ω, the set of codes of
the sets {x, y} ∈ E is recursive, G is locally finite, i.e., the degree of any vertex v ∈ V
is finite, and there is an effective procedure which, given any vertex v ∈ V, produces a
code of N(v) = {y ∈ V : {v, y} ∈ E}. A recursive 2k-coloring of G can be produced
as follows. Given any set W ⊆ V, let N(W) = {y ∈ V − W : (∃x ∈ W)({x, y} ∈
E)}. Then given any x ∈ V, define an effective increasing sequence of finite sets
A_1, A_2, ..., where A_1 = N(x) ∪ {x} and for all k > 1, A_k = A_{k−1} ∪ N(A_{k−1}).
It is easy to see that there can be no edge from an element of A_k to an element of
A_{k+2} − A_{k+1}. Since G is k-colorable, the induced graph determined by A_i − A_{i−1} is
k-colorable for all i > 1. We can then define a recursive 2k-coloring of G as follows.

(Step 1) Find a coloring of A_1 using colors {1, ..., k},

(Step 2) Find a coloring of A_2 − A_1 using colors {k + 1, ..., 2k},

(Step 3) Find a coloring of A_3 − A_2 using colors {1, ..., k},

(Step 4) Find a coloring of A_4 − A_3 using colors {k + 1, ..., 2k}, etc.

One can easily write a logic program to implement this procedure, and it will naturally
be effectively locally finite and have the level extension property.
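The layered procedure can be sketched on a finite graph. The adjacency-list representation, the brute-force coloring of each layer, and the function names below are illustrative assumptions; the sketch assumes, as in Bean's result, that G is k-colorable.

```python
from itertools import product

def k_color(vertices, adj, palette):
    # Brute-force proper coloring of a finite induced subgraph
    # (only intra-layer edges are checked).
    vs = sorted(vertices)
    for assign in product(palette, repeat=len(vs)):
        col = dict(zip(vs, assign))
        if all(col[u] != col[v]
               for u in vs for v in adj[u] if v in col):
            return col
    return None

def layered_coloring(adj, root, k):
    # Layers A_1, A_2 - A_1, ... are colored alternately from palettes
    # {1..k} and {k+1..2k}; non-adjacent layers share no edge, so the
    # alternating palettes never clash across layers.
    colored = {}
    layer = {root} | set(adj[root])      # A_1 = N(root) ∪ {root}
    seen, parity = set(layer), 0
    while layer:
        palette = range(1, k + 1) if parity == 0 else range(k + 1, 2 * k + 1)
        colored.update(k_color(layer, adj, palette))
        nxt = {y for x in layer for y in adj[x]} - seen
        seen |= nxt
        layer, parity = nxt, 1 - parity
    return colored

# A 6-cycle (2-colorable); the procedure uses at most 2k = 4 colors.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
col = layered_coloring(adj, 0, 2)
assert all(col[u] != col[v] for u in adj for v in adj[u])
print(col)
```

Each layer is finite, so exhaustive search suffices there; the global coloring is proper because consecutive layers draw from disjoint palettes and no edge joins layers two or more apart.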

Theorem 6.6.101 1. Suppose that P is an effectively locally determined recursive
logic program with the level extension property. Then for every level n and
stable model E_n of P_n, there is a recursive stable model E of P such that
E ∩ {a_0, ..., a_n} = E_n.

2. Suppose that P is a recursive logic program with effectively strong levels. Then
for every level n and stable model E_n of P_n, if there is a stable model E of P
with E ∩ {a_0, ..., a_n} = E_n, then there is a recursive stable model E of P
such that E ∩ {a_0, ..., a_n} = E_n.

Proof: For (1), fix a level n of P and a stable model E_n of P_n. Suppose that f
is the function which witnesses the fact that P is effectively locally determined. Then
let b_0, b_1, b_2, ... be the sequence n, f(n), f(f(n)), .... It is easy to see that the
level extension property implies that we can effectively construct a sequence of sets
E_{b_0}, E_{b_1}, E_{b_2}, ... such that (1) E_{b_0} = E_n, (2) for all j ≥ 0, E_{b_j} is a stable model of P_{b_j},
and (3) for all j ≥ 0, E_{b_{j+1}} ∩ {a_0, ..., a_{b_j}} = E_{b_j}. Now consider the tree T and the nodes
σ_{E_{b_i}} as constructed in Corollary 6.6.99. It is easy to check that for all i, σ_{E_{b_i}} ∈ T
and that σ_{E_{b_0}} ⊑ σ_{E_{b_1}} ⊑ σ_{E_{b_2}} ⊑ .... It follows that there is a unique path β in [T]
which extends all the σ_{E_{b_i}}, and that E = ⋃_{i≥0} E_{b_i} is a stable model of P. Moreover, E
is recursive because to decide if a_j ∈ E, one need only find k such that b_k ≥ j, in
which case a_j ∈ E ⟺ a_j ∈ E_{b_k}.


For (2), assume that f is the recursive function which witnesses the fact that P
has effectively strong levels and let b_0, b_1, b_2, ... be defined as above. We claim that the
strong level property once again lets us construct an effective sequence E_{b_0}, E_{b_1}, E_{b_2}, ... such
that (1) E_{b_0} = E_n, (2) for all j ≥ 0, E_{b_j} is a stable model of P_{b_j}, and (3) for all j ≥ 0,
E_{b_{j+1}} ∩ {a_0, ..., a_{b_j}} = E_{b_j}. That is, suppose that we have constructed E_{b_k} such
that there exists a stable model S of P such that S ∩ {a_0, ..., a_{b_k}} = E_{b_k}. Now
consider the strong level b_{k+2}. Our definition of strong level ensures that there must
be some stable model F_{k+2} of P_{b_{k+2}} such that F_{k+2} ∩ {a_0, ..., a_{b_k}} = E_{b_k}. Then let
E_{b_{k+1}} = F_{k+2} ∩ {a_0, ..., a_{b_{k+1}}}. The argument in Theorem 6.6.96 shows that E_{b_{k+1}} is
a stable model of P_{b_{k+1}}. Moreover, since b_{k+2} is a strong level, there must be a stable
model S' of P such that E_{b_{k+1}} = S' ∩ {a_0, ..., a_{b_{k+1}}}, since otherwise there could be no
stable model F of P_{b_{k+2}} such that E_{b_{k+1}} = F ∩ {a_0, ..., a_{b_{k+1}}}. This given, we can then
construct our desired recursive stable model E exactly as in (1).

We now consider another, more direct approach to producing a recursive stable
model.

Definition 6.6.102 We say that a recursive logic program P has witnesses with
effective delay if there is a recursive function f such that for all n, f(n) > n,
and whenever there is a set S ⊆ {a_0, ..., a_n} such that there is a stable model E
of P with E ∩ {a_0, ..., a_n} = S but there is no stable model F of P such that F ∩
{a_0, ..., a_n, a_{n+1}} = S, then either

(i) there is a proof scheme φ with max(φ) ≤ n + 1 such that cln(φ) = a_{n+1} and
supp(φ) ⊆ {a_0, ..., a_n} − S, or

(ii) for all sets T included in {a_{n+2}, ..., a_{f(n)}}, there is a proof scheme ψ_T with
max(ψ_T) ≤ f(n) such that supp(ψ_T) ⊆ {a_0, ..., a_{f(n)}} − (T ∪ S) and
cln(ψ_T) ∈ {a_0, ..., a_{f(n)}} − (T ∪ S).


Note that in case (i), the proof scheme φ witnesses that we must have a_{n+1} in any
stable model E such that E ∩ {a_0, ..., a_n} = S. In case (ii), the proof schemes ψ_T
show that we cannot have a stable model E of P such that E ∩ {a_0, ..., a_{f(n)}} =
S ∪ T, so that we are again forced to have a_{n+1} in any stable model E such that
E ∩ {a_0, ..., a_n} = S.

Theorem 6.6.103 Suppose that P is a recursive logic program which has witnesses
with effective delay and has at least one stable model. Then the lexicographically least
stable model E of P is recursive.

Proof: We can construct the lexicographically least stable model E of P by
induction as follows.

Suppose that for a given n we have constructed E_n = E ∩ {a_0, ..., a_n}. Then
E_{n+1} = E_n unless either

(i) there is a proof scheme φ with max(φ) ≤ n + 1 such that supp(φ) ⊆ {a_0, ..., a_n} − E_n and
cln(φ) = a_{n+1}, or

(ii) for all sets T included in {a_{n+2}, ..., a_{f(n)}}, there is a proof scheme ψ_T with
max(ψ_T) ≤ f(n) such that supp(ψ_T) ⊆ {a_0, ..., a_{f(n)}} − (T ∪ E_n) and
cln(ψ_T) ∈ {a_0, ..., a_{f(n)}} − (T ∪ E_n),

in which case E_{n+1} = E_n ∪ {a_{n+1}}. Note that since there are only finitely many
minimal proof schemes φ with max(φ) ≤ k for any given k, we can check conditions
(i) and (ii) effectively. Since there is a stable model, it is easy to see that our
definitions ensure that E_n is always contained in the lexicographically least stable
model of P. Thus E = ⋃_n E_n is recursive.

By putting suitable effective bounds on the effective levels and/or the witnesses
with effective delay, one can readily come up with conditions that force P to have
exponential time, NP, or P-time stable models.

CHAPTER 7
FUTURE DIRECTIONS

There are many avenues of research yet to be explored in Nonmonotonic Rule
Systems and other forms of Nonmonotonic Logic, including Default Logic and Logic
Programming. As far as common derivations are concerned, we have only scratched
the surface. There are three extensions of the research which should be followed,
namely further work in the complexity of the common derivation, applications of the
current results, and combining Nonmonotonic Rule Systems with Fuzzy Logic and/or
Multiple-Valued Logic.

The complexity of common derivations has been touched upon in this
dissertation, but it has by no means been exhaustively studied. In addition, we might
consider a multitude of other types of complexity aside from those mentioned here.

Some of the original motivation for the results in this dissertation was to create
the mathematics necessary to apply Nonmonotonic Logic to such areas as advertising,
political campaigning, and other social sciences. The results, at this point, I believe
are sufficient to begin the creation of models for use in these fields. As a matter of
practicality, these models should be included in the future directions of this research.

To combine Nonmonotonic Rule Systems with Fuzzy Logic and/or Multiple-Valued
Logic may seem a strange or unrelated road to follow from the present work.
However, Nonmonotonic Logic was initially established with the intention of formalizing
human reasoning. It is optimistic, to say the least, to suppose that every human
belief and thought carries a black-and-white value of true or false. Human reasoning
often treads a grey area somewhere in between. Thus, Fuzzy Logic or at least
Multiple-Valued Logic is called for. In Nonmonotonic Rule Systems we may use any
underlying logical system to govern U. Interesting results may first be found in the
comparatively less complicated world of Multiple-Valued Logic, beginning with three
values and generalizing results to n-valued logic. From there, choosing Fuzzy Logic
to be the "logic of choice" for the Nonmonotonic Rule System ⟨U, N⟩ would be
the natural step to take.

This is by no means a complete list of possible future directions. Nonmono-
tonic Logic, a vigorous and rapidly growing area of research, is still in its infancy.
There are many questions still to be answered, and each new result brings its weight
in new roads to follow.

REFERENCES

[AN78] H. Andréka and I. Németi. The generalized completeness of Horn predicate
logic as a programming language. Acta Cybernetica 4 (1978): 3-10.

[Apt90] K.R. Apt. Logic programming. Handbook of Theoretical Computer Science,
J. van Leeuwen (ed.). Cambridge, MA: MIT Press, 1990.

[AB90] K.R. Apt, and H.A. Blair. Arithmetical classification of perfect models
of stratified programs. Fundamenta Informaticae 13 (1990): 1-17.

[B90] J. Banks. A model of electoral competition with incomplete information.

Journal of Economic Theory 50 (1990): 309-325.

[Be76] D.R. Bean. Effective coloration. Journal of Symbolic Logic 41 (1976): 469-
480.

[BDK97] G. Brewka, J. Dix, and K. Konolige. Nonmonotonic reasoning: An
overview. Stanford, CA: CSLI, 1997.

[BG97] G. Brewka and G. Gottlob. Well-founded semantics for default logic.
Fundamenta Informaticae 31(3-4) (1997): 221-236.

[CMMT95] P. Cholewiński, W. Marek, A. Mikitiuk, and M. Truszczyński. Experimenting
with nonmonotonic reasoning. Proceedings of the Twelfth International
Conference on Logic Programming, 267-281. Cambridge, MA: MIT Press, 1995.

[CMT96] P. Cholewiński, W. Marek, and M. Truszczyński. Default reasoning system
DeRes. Proceedings of the International Conference on Principles
of Knowledge Representation and Reasoning, L. Carlucci Aiello, J. Doyle,
and S. Shapiro (eds.). KR96 (1996): 518-528.

[CR98] D. Cenzer and J.B. Remmel. Π⁰₁ classes in mathematics. Handbook of
Recursive Mathematics: Volume 2, Yu. L. Ershov, S.S. Goncharov, A.
Nerode, and J.B. Remmel (eds.). Studies in Logic and the Foundations
of Mathematics, 139: 623-822. Amsterdam: Elsevier, 1998.

[CR99] D. Cenzer and J. Remmel. Complexity, decidability, and completeness.
Preprint.

[CRV99] D. Cenzer, J.B. Remmel, and A.K.C.S. Vanderbilt. Locally determined
logic programs. Proceedings of the 5th international Conference on
Logic Programming and Nonmonotonic Reasoning (LPNMR99). Berlin:
Springer- Verlag, 34-49. 1999.

[Do79] J. Doyle. A truth maintenance system. Artificial Intelligence 12 (1979):
231-272.



[DM82] J. Doyle and D. McDermott. Nonmonotonic logic II: nonmonotonic modal
theories. Journal of the A.C.M. 29 (1982): 33-57.

[EG93] T. Eiter and G. Gottlob. Complexity results for disjunctive logic programming
and application to nonmonotonic logics. Proceedings of the
Tenth International Logic Programming Symposium (ILPS-93). Cambridge,
MA: MIT Press, 1993, 266-278.

[EMTT96] J. Engelfriet, W. Marek, J. Treur, and M. Truszczyński. Infinitary default
logic for specification of nonmonotonic reasoning. Logics in Artificial
Intelligence, Proceedings European Workshop on Logics in Artificial
Intelligence, JELIA'96. Lecture Notes in Artificial Intelligence 1126 (1996):
224-236.

[GL88] M. Gelfond and V. Lifschitz. The stable semantics for logic programs.
Proceedings of the 5th International Symposium on Logic Programming.
Cambridge, MA: MIT Press, 1988, 1070-1080.

[Gol94] C. Goldburg. The accuracy of game theory predictions for political behavior:
cumulative voting in Illinois revisited. Journal of Politics 56(4)
(1994): 885-900.

[Got92] G. Gottlob. Complexity results for nonmonotonic logics. Journal of Logic
and Computation 2(3) (1992): 397-425.

[Got93] G. Gottlob. Recent complexity results in logic programming and non-
monotonic reasoning and why they matter. Proceedings of the Second
International Workshop on Logic Programming and Nonmonotonic Rea-
soning LPNMR'93. Cambridge, MA: MIT Press, 1993, 265.

[Got95] G. Gottlob. The complexity of default reasoning under the stationary
fixed point semantics. Information and Computation 121(1) (1995): 81-.

[GM94] G. Gottlob and Z. Mingyi. Cumulative default logic: finite characterization,
algorithms and complexity. Artificial Intelligence 69 (1994): 329-.

[MNR90] W. Marek, A. Nerode, and J. Remmel. A theory of nonmonotonic rule
systems I. Annals of Mathematics and Artificial Intelligence 1 (1990):
241-273.

[MNR92a] W. Marek, A. Nerode, and J. Remmel. A theory of nonmonotonic rule
systems II. Annals of Mathematics and Artificial Intelligence 5 (1992):
229-263.

[MNR92b] W. Marek, A. Nerode, and J. Remmel. How complicated is the set of
stable models of a recursive logic program? Annals of Pure and Applied
Logic 56 (1992): 119-135.

[MNR92c] W. Marek, A. Nerode, J. Remmel. The stable models of a predicate
logic program. Journal of Logic Programming 21(3) (1992): 129-154.

[MNR94] W. Marek, A. Nerode, and J. Remmel. A context for belief revision:
forward chaining-normal nonmonotonic rule systems. Annals of Pure and
Applied Logic 67(1-3) (1994): 269-323.


[MNR94a] W. Marek, A. Nerode, and J. B. Remmel. The stable models of predicate
logic programs. Journal of Logic Programming 21 (1994): 129-154.

[MNR95] W. Marek, A. Nerode, and J. Remmel. Complexity of normal default logic and related modes of nonmonotonic reasoning. Proceedings of the 10th Annual IEEE Symposium on Logic in Computer Science. Cambridge, MA: MIT Press (1995): 178-187.

[MNR97a] W. Marek, A. Nerode, and J. Remmel. Complexity of recursive normal
default logic. Fundamenta Informaticae 32(2) (1997): 139-148.

[MNR97b] W. Marek, A. Nerode, and J. Remmel. Nonmonotonic rule systems with recursive sets of restraints. Archiv für Mathematische Logik 36 (1997): 339-384.

[MNR99a] W. Marek, A. Nerode, and J. Remmel. Logic programs, well-orderings
and forward chaining. Annals of Pure and Applied Logic 96 (1999): 231-
276.

[MNR99b] W. Marek, A. Nerode, and J. Remmel. Forward chaining-normal non-
monotonic rule systems. Preprint. 1999.

[MS89] W. Marek and V. S. Subrahmanian. Relationship between logic program semantics and nonmonotonic reasoning. Proceedings of the Sixth International Conference on Logic Programming. Cambridge, MA: MIT Press (1989), 417-419.

[MST91] W. Marek, G. Schwartz, and M. Truszczyński. Ranges of strong nonmonotonic logics. Nonmonotonic Logic and Inductive Logic. Springer Lecture Notes in Computer Science 543 (1991): 85-99.

[MT89a] W. Marek and M. Truszczyński. Stable semantics for logic programs and default theories. Proceedings of North American Conference on Logic Programming. Cambridge, MA: MIT Press (1989), 243-256.

[MT89b] W. Marek and M. Truszczyński. Relating autoepistemic and default logics. Proceedings of the International Conference on Knowledge Representation KR89. San Francisco, CA: Morgan Kaufmann (1989), 276-288.

[MT90] W. Marek and M. Truszczyński. Modal logic for default reasoning. Annals of Mathematics and Artificial Intelligence 1 (1990): 275-302.

[MT92] W. Marek and M. Truszczyński. More on modal aspects of default logic. Fundamenta Informaticae 17(1-2) (1992): 99-116.

[MT93a] W. Marek and M. Truszczyński. Nonmonotonic logic: context-dependent reasoning. Berlin: Springer-Verlag, 1993.

[MT93b] W. Marek and M. Truszczyński. Normal form results for default logic. Nonmonotonic and Inductive Logic II, G. Brewka, K. P. Jantke, and P. H. Schmitt (eds.). Springer Lecture Notes in Computer Science 659 (1993): 270-283.

[MTT97] W. Marek, J. Treur, and M. Truszczyński. Representation theory for default logic. Annals of Mathematics and Artificial Intelligence 21 (1997): 343-358.


[MZ90] W. Marek and A. Zhang. On the classification and existence of structures in default logic. Fundamenta Informaticae 13(4) (1990): 485-499.

[Rei80] R. Reiter. A logic for default reasoning. Artificial Intelligence 13 (1980): 81-132.

BIOGRAPHICAL SKETCH

Amy K. C. S. Vanderbilt was born Amy Kathryn Colleen Sartain on February 17, 1975, in Fayetteville, Tennessee. She moved to Orlando, Florida, in 1978, where she lived until 1992 and attended Lake Highland Preparatory School. In 1992 she moved to Gainesville, Florida, to attend the University of Florida, earning a bachelor's degree in mathematics in 1995, a master's degree in pure mathematics in 1997, and a doctorate in mathematics, specializing in logic, in May of 2000.


I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.

Douglas A. Cenzer, Chairman
Professor of Mathematics

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.

Jean A. Larson
Professor of Mathematics

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.

Rick L. Smith

Associate Professor of Mathematics

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.

Jorge Martinez
Professor of Mathematics

I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and quality,
as a dissertation for the degree of Doctor of Philosophy.

Beverly A. Sanders
Associate Professor of Computer and
Information Science and Engineering

This dissertation was submitted to the Graduate Faculty of the Department of
Mathematics in the College of Liberal Arts and Sciences and to the Graduate School and
was accepted as partial fulfillment of the requirements for the degree of Doctor of
Philosophy.

May 2000 Dean, Graduate School


```